java.io.StreamCorruptedException when importing a CSV to a Spark DataFrame
I'm running a Spark cluster in standalone mode. Both the master and worker nodes are reachable, and their logs show up in the Spark Web UI.
I'm trying to load data into a PySpark session so that I can work with Spark DataFrames.
Following several examples (among them one from the official documentation), I tried different methods, all of which fail with the same error. For example:
from pyspark.conf import SparkConf
from pyspark.context import SparkContext
from pyspark.sql import SparkSession, SQLContext
conf = SparkConf().setAppName('NAME').setMaster('spark://HOST:7077')
sc = SparkContext(conf=conf)
spark = SparkSession.builder.getOrCreate()  # reuses the existing SparkContext
# a try
df = spark.read.load('/path/to/file.csv', format='csv', sep=',', header=True)
# another try
sql_ctx = SQLContext(sc)
df = sql_ctx.read.csv('/path/to/file.csv', header=True)
# and a few other tries...
Every time, I get the same error:
Py4JJavaError: An error occurred while calling o81.csv. :
org.apache.spark.SparkException: Job aborted due to stage failure:
Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3
in stage 0.0 (TID 3, 192.168.X.X, executor 0):
java.io.StreamCorruptedException: invalid stream header: 0000000B
I'm loading data from both JSON and CSV files (adjusting the method calls accordingly, of course), and the error is the same for both, every time.
Does anyone understand what the problem is?
apache-spark pyspark pyspark-sql
asked Nov 13 at 17:01
edouardtheron
1 Answer
To whom it may concern: I finally figured out the problem thanks to this response.
The pyspark version used for the SparkSession did not match the Spark cluster version (2.4 vs. 2.3).
Reinstalling pyspark at version 2.3 solved the issue instantly. #facepalm
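For anyone hitting the same error: a quick way to confirm whether this mismatch is the cause is to compare the driver's pyspark version with the Spark version the cluster reports. The snippet below is only a minimal sketch, not taken from the original post; the local[*] master and the pip command are illustrative, and the final step just pins pyspark to the cluster's 2.3 line.
import pyspark
from pyspark.sql import SparkSession
# Driver-side check: the version of the installed pyspark package.
print('pyspark (driver) version:', pyspark.__version__)
# The same information is available from an existing session via spark.version.
# local[*] is used here only so the snippet runs on its own; in the setup from
# the question the master would be spark://HOST:7077.
spark = SparkSession.builder.master('local[*]').appName('version-check').getOrCreate()
print('Spark (driver) version:', spark.version)
# Compare these against the cluster version shown at the top of the standalone
# master's Web UI (typically http://HOST:8080). If they differ, reinstall the
# matching pyspark release on the driver, e.g.:
#   pip install 'pyspark==2.3.*'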
edited Nov 18 at 20:07
answered Nov 18 at 16:52
edouardtheron