Passing Array To Spark Lit Function
Let's say I have a numpy array a that contains the numbers 1-10: [1 2 3 4 5 6 7 8 9 10]. I also have a Spark dataframe to which I want to add my numpy array a. I figure that a column of literals will do the job.
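For context, passing the array straight to F.lit does not produce an array column. A minimal sketch of the naive attempt (assuming a running SparkSession named spark; on older Spark versions the commented line raises an "Unsupported literal type" error, since lit did not accept lists or numpy arrays):

import numpy as np
import pyspark.sql.functions as F

a = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
df = spark.createDataFrame([['a b c d e f g h i j '],], ['col1'])
# df = df.withColumn("NewColumn", F.lit(a))  # fails on older Spark versions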
Solution 1:
List comprehension inside Spark's array function:

import pyspark.sql.functions as F

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
df = spark.createDataFrame([['a b c d e f g h i j '],], ['col1'])
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a]))
df.show(truncate=False)
df.printSchema()
# +--------------------+-------------------------------+
# |col1                |NewColumn                      |
# +--------------------+-------------------------------+
# |a b c d e f g h i j |[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]|
# +--------------------+-------------------------------+
#
# root
#  |-- col1: string (nullable = true)
#  |-- NewColumn: array (nullable = false)
#  |    |-- element: integer (containsNull = false)
@pault commented (Python 2.7): you can hide the loop using map:

df.withColumn("NewColumn", F.array(map(F.lit, a)))
@abegehr added the Python 3 version; in Python 3, map returns an iterator, so it has to be unpacked into separate arguments for F.array:

df.withColumn("NewColumn", F.array(*map(F.lit, a)))
Spark's udf
import pyspark.sql.types as T

# Defining the UDF
def arrayUdf():
    return a

callArrayUdf = F.udf(arrayUdf, T.ArrayType(T.IntegerType()))

# Calling the UDF
df = df.withColumn("NewColumn", callArrayUdf())
Output is the same.
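Since the question starts from a numpy array rather than a Python list, note that F.lit may reject numpy scalar types such as numpy.int64 on some Spark versions. A minimal sketch, assuming the array from the question; tolist() converts the elements to plain Python ints first:

import numpy as np
import pyspark.sql.functions as F

a_np = np.arange(1, 11)  # numpy array with the numbers 1-10
# tolist() yields plain Python ints, which F.lit accepts on any version
df = df.withColumn("NewColumn", F.array([F.lit(x) for x in a_np.tolist()]))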
Solution 2:
In the Scala API, we can use the typedLit function to add Array or Map values as a column.
// Ref: https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.sql.functions$
Here is sample code that adds an Array and a Map as column values.
import org.apache.spark.sql.functions.typedLit
val df1 = Seq((1, 0), (2, 3)).toDF("a", "b")
df1.withColumn("seq", typedLit(Seq(1,2,3)))
.withColumn("map", typedLit(Map(1 -> 2)))
.show(truncate=false)
// Output
+---+---+---------+--------+
|a |b |seq |map |
+---+---+---------+--------+
|1 |0 |[1, 2, 3]|[1 -> 2]|
|2 |3 |[1, 2, 3]|[1 -> 2]|
+---+---+---------+--------+
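The reason typedLit is needed here is that lit only handles simple literal types, while typedLit can also handle parameterized Scala types such as List, Seq and Map; lit(Seq(1, 2, 3)) would fail where typedLit(Seq(1, 2, 3)) works.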
I hope this helps.