Is there a way to disable writing the .jhist file for MapReduce?
I have a small cluster with a not very good network. Occasionally, a long-running job will get to 100% map & 100% reduce, and then fail.
The problem appears to be this:
At the start of the job, MapReduce opens a DataStreamer to write the .jhist file. Over the course of the job, the (small number of) DataNodes occasionally disconnect and reconnect. When this happens, if the disconnecting DataNode is currently in the .jhist write pipeline, it is marked 'bad' (for that pipeline) and is never reconsidered; a new DataNode replaces it in the pipeline.
However, if every DataNode eventually becomes 'bad', then at the end of the job the MRAppMaster/JobHistoryEventHandler attempts to write to the broken pipeline and crashes with a java.io.IOException ("All datanodes ... are bad", and so on). Things go downhill from there, and the job is ultimately reported as failed even though all of its work completed.
These .jhist files are not important to me, but despite extensive searching I cannot find a way to disable them. Is this possible? Alternatively, is there a way to get DataStreamers to retry DataNodes previously marked as 'bad'? If neither of these is possible, any other workarounds would be much appreciated.
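For reference, here is a minimal sketch (Java, using the standard MapReduce Job/Configuration API) of the kind of client-side override I have in mind for the "retry/replace bad DataNodes" angle. The dfs.client.block.write.replace-datanode-on-failure.* properties are standard HDFS client settings; whether the MRAppMaster's .jhist DataStreamer actually honors them when they are set on the job configuration is an assumption on my part, and the class and job names below are just placeholders.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.mapreduce.Job;

    public class PipelineRecoveryJob {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();

            // HDFS client settings that control what happens when a DataNode in a
            // write pipeline fails. With best-effort enabled, the client keeps
            // writing with whatever DataNodes remain instead of giving up once it
            // can no longer find a replacement.
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.enable", true);
            conf.set("dfs.client.block.write.replace-datanode-on-failure.policy", "DEFAULT");
            conf.setBoolean("dfs.client.block.write.replace-datanode-on-failure.best-effort", true);

            // Assumption: the MRAppMaster's HDFS client reads these values from
            // the submitted job configuration rather than only from hdfs-site.xml.
            Job job = Job.getInstance(conf, "long-running-job");
            // ... normal job setup (mapper, reducer, input/output paths) ...
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Setting the same properties in hdfs-site.xml on the node running the MRAppMaster would presumably be the more reliable variant, if they apply at all.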
I am using Hadoop 3.0.3. Upgrading to a newer version of Hadoop is an option, but downgrading to a version prior to 3 is not.
hadoop mapreduce hdfs yarn datanode
asked Nov 18 at 19:20 · edited Nov 18 at 19:41
Jahwffrey (64)