Given a DataFrame:
+---+-----------+---------+-------+------------+
| id|      score|tx_amount|isValid|    greeting|
+---+-----------+---------+-------+------------+
|  1|        0.2|    23.78|   true| hello_world|
|  2|        0.6|    12.41|  false|byebye_world|
+---+-----------+---------+-------+------------+
我想使用输入 Dataframe 的类型将这些列分解为名为“col_value”的行。
df.dtypes
[('id', 'int'), ('model_score', 'double'), ('tx_amount', 'double'), ('isValid', 'boolean'), ('greeting', 'string')]
Expected output:
+---+------------+--------+---------+----------+-------+---------+
| id|   col_value|is_score|is_amount|is_boolean|is_text|col_name |
+---+------------+--------+---------+----------+-------+---------+
|  1|         0.2|       Y|        N|         N|      N|score    |
|  1|       23.78|       N|        Y|         N|      N|tx_amount|
|  1|        true|       N|        N|         Y|      N|isValid  |
|  1| hello_world|       N|        N|         N|      Y|greeting |
|  2|         0.6|       Y|        N|         N|      N|score    |
|  2|       12.41|       N|        Y|         N|      N|tx_amount|
|  2|       false|       N|        N|         Y|      N|isValid  |
|  2|byebye_world|       N|        N|         N|      Y|greeting |
+---+------------+--------+---------+----------+-------+---------+
What I have so far:
df.withColumn("cols", F.explode(F.arrays_zip(F.array("score", "tx_amount", "isValid", "greeting")))) \
  .select("id", F.col("cols.*")) \
  ...
But when I try to zip the columns to use in the explode, it throws an error about the types:
pyspark.sql.utils.AnalysisException: "cannot resolve 'array(`id`, `model_score`, `tx_amount`, `isValid`, `greeting`)' due to data type mismatch: input to function array should all be the same type, but it's [int, double, double, boolean, string]
How can I do this when the input columns can be of completely different types?
Sample DataFrame:
df.show()
df.printSchema()

+---+-----------+---------+-------+------------+
| id|model_score|tx_amount|isValid|    greeting|
+---+-----------+---------+-------+------------+
|  1|        0.2|    23.78|   true| hello_world|
|  2|        0.6|    12.41|  false|byebye_world|
+---+-----------+---------+-------+------------+

root
 |-- id: integer (nullable = true)
 |-- model_score: double (nullable = true)
 |-- tx_amount: double (nullable = true)
 |-- isValid: boolean (nullable = true)
 |-- greeting: string (nullable = true)
I tried to keep it dynamic for any column input. It takes the types from df.dtypes[1:]; id is not included in col_value, which is why it is skipped with [1:]. array only accepts elements of the same type, which is why we cast all columns to string before applying the logic. I think it should work for your use case. You can build your Y/N columns from here.
df.select([F.col(c).cast("string") for c in df.columns])\
  .withColumn("cols", F.explode(F.arrays_zip(F.array([F.array(x[0], F.lit(x[1]), F.lit(x[0]))\
                                                      for x in df.dtypes[1:]]))))\
  .select("id", F.col("cols.*"))\
  .withColumn("col_value", F.element_at("0", 1))\
  .withColumn("col_type", F.element_at("0", 2))\
  .withColumn("col_name", F.element_at("0", 3))\
  .drop("0").show()

+---+------------+--------+-----------+
| id|   col_value|col_type|   col_name|
+---+------------+--------+-----------+
|  1|         0.2|  double|model_score|
|  1|       23.78|  double|  tx_amount|
|  1|        true| boolean|    isValid|
|  1| hello_world|  string|   greeting|
|  2|         0.6|  double|model_score|
|  2|       12.41|  double|  tx_amount|
|  2|       false| boolean|    isValid|
|  2|byebye_world|  string|   greeting|
+---+------------+--------+-----------+