When using a UDF in PySpark, which type should a dense vector be? [duplicate]
You can use `Vectors.dense` together with `VectorUDT` in a UDF:

```python
from pyspark.ml.linalg import Vectors, VectorUDT
from pyspark.sql import functions as F

ud_f = F.udf(lambda r: Vectors.dense(r), VectorUDT())
df = df.withColumn('b', ud_f('a'))
df.show()
# +-------------------------+---------------------+
# |a                        |b                    |
# +-------------------------+---------------------+
# |[0.1, 0.2, 0.3, 0.4, 0.5]|[0.1,0.2,0.3,0.4,0.5]|
# +-------------------------+---------------------+

df.printSchema()
# root
#  |-- a: array (nullable = true)
#  |    |-- element: double (containsNull = true)
#  |-- b: vector (nullable = true)
```

For details on `VectorUDT`, see: http://spark.apache.org/docs/2.2.0/api/python/_modules/pyspark/ml/linalg.html