Configure Zeppelin's Spark Interpreter on EMR when starting a cluster
I am creating clusters on EMR and configuring Zeppelin to read the notebooks from S3. To do that, I am using a JSON object that looks like this:
[
  {
    "Classification": "zeppelin-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "ZEPPELIN_NOTEBOOK_STORAGE": "org.apache.zeppelin.notebook.repo.S3NotebookRepo",
          "ZEPPELIN_NOTEBOOK_S3_BUCKET": "hs-zeppelin-notebooks",
          "ZEPPELIN_NOTEBOOK_USER": "user"
        },
        "Configurations": []
      }
    ]
  }
]
I am pasting this object in the Software configuration page of EMR.
My question is: how/where can I configure the Spark interpreter directly, without having to configure it manually from Zeppelin each time I start a cluster?
apache-spark emr amazon-emr apache-zeppelin
asked Jul 26 '17 at 13:39 by Rami
2 Answers
This is a bit involved; you will need to do two things:
- Edit Zeppelin's interpreter.json
- Restart the interpreter

Write a shell script that does both, then add an extra step to the EMR cluster configuration that runs it. The Zeppelin configuration is JSON, so you can use jq (a command-line JSON processor) to manipulate it. I don't know what you want to change exactly, but here is an example that adds the (mysteriously missing) DepInterpreter:
#!/bin/bash
set -e

# 1. Edit the Spark interpreter settings.
# Write to a temporary file first: piping jq's output straight back into
# the file it is reading can truncate the file before jq has read it.
jq '.interpreterSettings."2ANGGHHMQ".interpreterGroup |= . + [{"class":"org.apache.zeppelin.spark.DepInterpreter", "name":"dep"}]' \
  /etc/zeppelin/conf/interpreter.json > /tmp/interpreter.json
sudo -u zeppelin cp /tmp/interpreter.json /etc/zeppelin/conf/interpreter.json

# 2. Trigger a restart of the Spark interpreter via Zeppelin's REST API.
curl -X PUT http://localhost:8890/api/interpreter/setting/restart/2ANGGHHMQ
Put this shell script in an S3 bucket, then start your EMR cluster with:
--steps Type=CUSTOM_JAR,Name=CustomJAR,ActionOnFailure=CONTINUE,Jar=s3://eu-west-1.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[s3://mybucket/script.sh]
answered Jul 26 '17 at 14:20 by rdeboo
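The comments below ask how to set properties such as spark.yarn.executor.memoryOverhead with this approach. Here is a sketch of that edit in Python rather than jq, assuming the flat name-to-value "properties" map that Zeppelin 0.7.x kept in interpreter.json and the same "2ANGGHHMQ" interpreter id from the script above (both are assumptions; inspect your own interpreter.json first):

```python
import json

def set_spark_properties(path, overrides):
    """Merge Spark property overrides into Zeppelin's interpreter.json.

    Assumes the Zeppelin 0.7.x layout, where each interpreter setting has
    a flat "properties" map of name -> value strings. "2ANGGHHMQ" is the
    interpreter id from the answer above and may differ on your install.
    """
    with open(path) as f:
        config = json.load(f)
    props = config["interpreterSettings"]["2ANGGHHMQ"]["properties"]
    props.update(overrides)  # existing keys are overwritten, others kept
    with open(path, "w") as f:
        json.dump(config, f, indent=2)

# Example usage (run with permission to write the file, then restart Zeppelin):
# set_spark_properties("/etc/zeppelin/conf/interpreter.json", {
#     "spark.executor.memory": "4g",
#     "spark.executor.cores": "4",
#     "spark.yarn.executor.memoryOverhead": "2048",
# })
```

As with the jq version, the change only takes effect after the interpreter (or, per the comments below, Zeppelin itself) is restarted.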
Great, thanks @rdeboo. Can you please elaborate on what "2ANGGHHMQ" is? And can you provide an example of setting "spark.yarn.executor.memoryOverhead" to 2048 (which is my case), along with spark.executor.memory and spark.executor.cores?
– Rami
Aug 8 '17 at 9:49
@Rami it's some internal key name that identifies the relevant section in interpreter.json. It seems stable (I've looked at many instances in EMR with different versions), but there are of course no guarantees that this will not change. In any case, I think AWS should just fix the default configuration so we can all stop using this workaround.
– rdeboo
Aug 14 '17 at 14:18
This is great work! But it needed a critical adjustment in my case: restarting the interpreter using the REST API doesn't seem to pick up any changes in interpreter.json. Zeppelin itself needs to be restarted, at least on EMR. So instead of curl, it worked with: sudo /usr/lib/zeppelin/bin/zeppelin-daemon.sh restart
– Radu Simionescu
Jan 5 '18 at 19:21
Turns out "sudo /usr/lib/zeppelin/bin/zeppelin-daemon.sh restart" on EMR is sometimes problematic; the recommended way is "sudo stop zeppelin" followed by "sudo start zeppelin".
– Radu Simionescu
Jan 7 '18 at 1:53
I suggest using Terraform to create your cluster. There is an argument:
configurations_json = "${file("config.json")}"
that lets you inject a JSON file as the configuration for your EMR cluster:
https://www.terraform.io/docs/providers/aws/r/emr_cluster.html
answered Nov 23 '18 at 9:36 by Julio
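A sketch of how the resource might look; everything except configurations_json is an illustrative placeholder:

```hcl
# Illustrative sketch only: the resource name, cluster name, release label,
# and omitted arguments are placeholders; configurations_json is the part
# this answer is about.
resource "aws_emr_cluster" "cluster" {
  name          = "zeppelin-cluster"
  release_label = "emr-5.8.0"
  applications  = ["Spark", "Zeppelin"]

  # config.json holds the same array of classification objects shown
  # in the question (e.g. the zeppelin-env block).
  configurations_json = "${file("config.json")}"

  # ... instance configuration, service roles, etc.
}
```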
Misses the question: "how/where can I configure the Spark interpreter directly without the need to manually configure it from Zeppelin each time I start a cluster?"
– 9bO3av5fw5
Nov 27 '18 at 18:12
And the answer is: write your configurations into a JSON file and pass it via the Terraform option. I'm having the same problem, and I created a template to configure everything (Spark, Hive, Zeppelin, etc.).
– Julio
Nov 28 '18 at 15:45
And what do you write in config.json that alters the contents of /etc/zeppelin/conf/interpreter.json?
– 9bO3av5fw5
Dec 3 '18 at 11:31
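For Spark settings specifically, the config.json passed to configurations_json can use EMR's spark-defaults classification. Note this writes spark-defaults.conf at cluster creation rather than editing Zeppelin's interpreter.json, but Zeppelin's Spark interpreter typically honors spark-defaults.conf. A sketch, with the property values from the other answer's comments used as illustrative examples:

```json
[
  {
    "Classification": "spark-defaults",
    "Properties": {
      "spark.executor.memory": "4g",
      "spark.executor.cores": "4",
      "spark.yarn.executor.memoryOverhead": "2048"
    }
  }
]
```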