I want to go from a DataFrame containing lists of words to a DataFrame with each word in its own row.

How do I explode on a column in a DataFrame?

Here is an example showing some of my attempts, where you can uncomment each code line and get the error listed in the following comment. I'm using PySpark in Python 2.7 with Spark 1.6.1.
```python
from pyspark.sql.functions import split, explode

DF = sqlContext.createDataFrame([('cat \n\n elephant rat \n rat cat', )], ['word'])
print 'Dataset:'
DF.show()

print '\n\n Trying to do explode: \n'
DFsplit_explode = (
    DF
    .select(split(DF['word'], ' '))
#   .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
#   .map(explode)                 # AttributeError: 'PipelinedRDD' object has no attribute 'show'
#   .explode()                    # AttributeError: 'DataFrame' object has no attribute 'explode'
).show()

# Trying without split
print '\n\n Only explode: \n'
DFsplit_explode = (
    DF
    .select(explode(DF['word']))  # AnalysisException: u"cannot resolve 'explode(word)' due to data type mismatch: input to function explode should be array or map type, not StringType;"
).show()
```
Any advice is appreciated.
`explode` and `split` are SQL functions. Both operate on SQL `Column`s. `split` takes a Java regular expression as its second argument. If you want to separate data on arbitrary whitespace, you'll need something like this:
```python
from pyspark.sql.functions import col, explode, split

df = sqlContext.createDataFrame(
    [('cat \n\n elephant rat \n rat cat', )], ['word']
)
df.select(explode(split(col("word"), "\s+")).alias("word")).show()

## +--------+
## |    word|
## +--------+
## |     cat|
## |elephant|
## |     rat|
## |     rat|
## |     cat|
## +--------+
```
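For intuition, the `"\s+"` pattern passed to `split` matches runs of whitespace (spaces and newlines alike), so the exploded rows correspond to what Python's `re.split` produces on the same string. A quick plain-Python sketch (no Spark required) of the tokens the explode above will emit:

```python
import re

text = 'cat \n\n elephant rat \n rat cat'
# Split on runs of whitespace, mirroring the regex "\s+" used in the Spark example;
# the filter guards against empty strings from leading/trailing whitespace.
words = [w for w in re.split(r'\s+', text) if w]
print(words)  # ['cat', 'elephant', 'rat', 'rat', 'cat']
```

Each element of `words` becomes one row after `explode`, which matches the table shown above.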