Parsing a CSV file with multiline fields in PySpark
I am facing an issue while reading the file test2.csv in PySpark.
Test file test1.csv
a1^b1^c1^d1^e1
a2^"this is having
multiline data1
multiline data2"^c2^d2^e2
a3^b3^c3^d3^e3
a4^b4^c4^d4^e4
Test file test2.csv
a1^b1^c1^d1^e1
a2^this is having
multiline data1
multiline data2^c2^d2^e2
a3^b3^c3^d3^e3
a4^b4^c4^d4^e4
Below is the code:
from pyspark.sql.types import StructType, StructField, StringType

schema = StructType([
    StructField("A", StringType()),
    StructField("B", StringType()),
    StructField("C", StringType()),
    StructField("D", StringType()),
    StructField("E", StringType())
])
Creating the DataFrames for the two CSV files above:
# Note: inferSchema is redundant when an explicit schema is supplied;
# Spark uses the given schema and skips inference.
df1 = spark.read.csv("s3_path/test1.csv", schema=schema, multiLine=True, sep='^')
df1.show(10, False)
print('df1.count() is: ', df1.count())
Below is the output when I read the test1.csv file:
+---+-----------------------------------------------+---+---+---+
|A |B |C |D |E |
+---+-----------------------------------------------+---+---+---+
|a1 |b1 |c1 |d1 |e1 |
|a2 |this is having
multiline data1
multiline data2|c2 |d2 |e2 |
|a3 |b3 |c3 |d3 |e3 |
|a4 |b4 |c4 |d4 |e4 |
+---+-----------------------------------------------+---+---+---+
df1.count() is: 4
df2 = spark.read.csv("s3_path/test2.csv", schema=schema, multiLine=True, sep='^')
df2.show(10, False)
print('df2.count() is: ', df2.count())
Below is the output when I read the test2.csv file:
+---------------+---------------+----+----+----+
|A |B |C |D |E |
+---------------+---------------+----+----+----+
|a1 |b1 |c1 |d1 |e1 |
|a2 |this is having |null|null|null|
|multiline data1|null |null|null|null|
|multiline data2|c2 |d2 |e2 |null|
|a3 |b3 |c3 |d3 |e3 |
|a4 |b4 |c4 |d4 |e4 |
+---------------+---------------+----+----+----+
df2.count() is: 6
Source files: The difference between the two source files is that test1.csv has a double quote (") at the beginning and end of the multiline data, while test2.csv does not.
Issue description: Column B of the 2nd row contains multiline data. In the output for df2 there are 6 records: Spark treats each physical line as a new record, which is incorrect. The output for df1 has 4 records, and the multiline data in column B of the 2nd row is treated as a single string, which is correct.
Question: Can someone help me fix the code so that test2.csv is also read correctly?
regex apache-spark pyspark apache-spark-sql
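One possible workaround (a sketch, not an accepted answer from this thread): because test2.csv contains no quote character, the CSV parser has no way to tell where a record ends, so the record boundaries have to be reconstructed by hand. The sketch below assumes that ^ never occurs inside a field value, that every record has exactly five fields, and that the file fits in a single partition; the S3 path is the placeholder from the question.

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType

spark = SparkSession.builder.getOrCreate()
schema = StructType([StructField(c, StringType()) for c in "ABCDE"])  # as in the question

SEP = '^'
NUM_FIELDS = 5

# wholeTextFiles yields (path, content) pairs and pulls each file into one
# partition, so this only suits files that fit in memory.
content = spark.sparkContext.wholeTextFiles("s3_path/test2.csv").values().collect()[0]

records, buffer = [], None
for line in content.splitlines():
    # Glue continuation lines back onto the record they belong to.
    buffer = line if buffer is None else buffer + "\n" + line
    # A record is complete once it contains NUM_FIELDS - 1 separators;
    # this is where the "no ^ inside a field" assumption is used.
    if buffer.count(SEP) >= NUM_FIELDS - 1:
        records.append(buffer.split(SEP, NUM_FIELDS - 1))
        buffer = None

df2_fixed = spark.createDataFrame(records, schema=schema)
df2_fixed.show(10, False)  # 4 rows; the multiline value stays in column B

For large files this would need a distributed variant, or better, a fix at the source so that fields are quoted when the file is produced.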
I guess a column qualifier (single or double quotes) is required in a CSV file precisely so that the delimiter itself can be escaped. But you can try the "quote" option of the read.csv method.
– Prakash S
Nov 20 at 10:29
@PrakashS The quote option does not solve the problem here. It behaves as follows. Test file:
a1^b1^c1^d1^e1
a2^$its a single line ^data$^c2^d2^e2
a3^b3^c3^d3^e3
a4^b4^c4^d4^e4
spark.read.option('quote', '$').option("multiLine", "true").option("inferSchema", "true").schema(schema).option("sep", "^").option("header", "false").csv("s3://aria-preprod.snowflake.stg/pca/volatility/spark_issue/testy.csv").show(10, False)
– user10678179
Nov 21 at 19:07
Output:
+---+-----------------------+---+---+---+
|A  |B                      |C  |D  |E  |
+---+-----------------------+---+---+---+
|a1 |b1                     |c1 |d1 |e1 |
|a2 |its a single line ^data|c2 |d2 |e2 |
|a3 |b3                     |c3 |d3 |e3 |
|a4 |b4                     |c4 |d4 |e4 |
+---+-----------------------+---+---+---+
– user10678179
Nov 21 at 19:18
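As the comments above show, the quote option only helps when the file actually wraps multiline fields in some quote character; test2.csv has none, so the parser has nothing to latch onto. If the producer of the file can be changed, the cleaner fix is to quote fields at write time, which is exactly what makes test1.csv parse correctly. A minimal sketch, assuming the upstream writer is also Spark and using df1 from above as a stand-in for the data (the output path is a placeholder):

# Spark's CSV writer quotes fields that need it (a separator or newline
# inside the value), so the output round-trips with multiLine=True.
df1.write.csv("s3_path/test2_quoted", sep='^', quote='"', header=False, mode="overwrite")

# Reading it back with the original options now yields 4 records.
spark.read.csv("s3_path/test2_quoted", schema=schema, multiLine=True, sep='^').show(10, False)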