I want to run my Spark job on a Hadoop YARN cluster in cluster mode, and I am using the following command:

spark-submit --master yarn-cluster \
             --driver-memory 1g \
             --executor-memory 1g \
             --executor-cores 1 \
             sparkanalitic.jar param1 param2 param3

I am getting the error below. Kindly suggest what is going wrong, and whether the command is correct. I am using CDH 5.3.1.

Diagnostics: Application application_1424284032717_0066 failed 2 times due 
to AM Container for appattempt_1424284032717_0066_000002 exited with  
exitCode: 15 due to: Exception from container-launch.

Container id: container_1424284032717_0066_02_000001
Exit code: 15
Stack trace: ExitCodeException exitCode=15: 
    at org.apache.hadoop.util.Shell.runCommand(
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(
    at java.util.concurrent.ThreadPoolExecutor.runWorker(
    at java.util.concurrent.ThreadPoolExecutor$

Container exited with a non-zero exit code 15
.Failing this attempt.. Failing the application.
     ApplicationMaster host: N/A
     ApplicationMaster RPC port: -1
     queue: root.hdfs
     start time: 1424699723648
     final status: FAILED
     tracking URL: http://myhostname:8088/cluster/app/application_1424284032717_0066
     user: hdfs

2015-02-23 19:26:04 DEBUG Client - stopping client from cache: org.apache.hadoop.ipc.Client@4085f1ac
2015-02-23 19:26:04 DEBUG Utils - Shutdown hook called
2015-02-23 19:26:05 DEBUG Utils - Shutdown hook called

Any help would be greatly appreciated.




Exit code 15 can mean a lot of things. In our case, we got a similar error message because of an unsupported Java class version, and we fixed the problem by deleting the referenced Java class from our project.

Use this command to see the detailed error message:

yarn logs -applicationId application_1424284032717_0066

You should remove the .setMaster("local") call from your code. A master set programmatically in the application takes precedence over the --master flag passed to spark-submit, so hardcoding "local" prevents the job from actually running on YARN.
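As a minimal sketch of what the driver setup might look like (the object and app names here are hypothetical, since the original code is not shown), leave the master unset so spark-submit can supply it:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object SparkAnalytic {
  def main(args: Array[String]): Unit = {
    // Do NOT call .setMaster("local") here; the --master flag
    // given to spark-submit decides where the job runs.
    val conf = new SparkConf().setAppName("sparkanalitic")
    val sc = new SparkContext(conf)
    // ... job logic using args(0), args(1), args(2) ...
    sc.stop()
  }
}
```

With the master left out of the code, the same jar can run locally (--master local[*]) or on the cluster (--master yarn-cluster) without recompiling.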
