I have recently been experimenting with spark-shell in cluster mode, and I do not quite understand the difference between running code in spark-shell and packaging a standalone application with sbt.

Spark testing: errors encountered when submitting a job with spark-submit
1. Problem description:
After deploying Spark, I tested the examples from the week-1 course slides.
Test process: the spark-shell test completed normally, but the spark-submit example failed with this error:
14/07/11 19:23:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
2. Environment:
Hardware configuration (single machine):
Software configuration:
hadoop-2.3.0-cdh5.0.0, pseudo-distributed
spark-1.0.1-bin-2.3.0, pseudo-distributed
3. Error log:
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ jps
373 SecondaryNameNode
32645 DataNode
660 NodeManager
32544 NameNode
555 ResourceManager
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ sbin/start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/spark1/app/spark-1.0.1-bin-2.3.0/sbin/../logs/spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out
localhost: starting org.apache.spark.deploy.worker.Worker, logging to /home/spark1/app/spark-1.0.1-bin-2.3.0/sbin/../logs/spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ jps
373 SecondaryNameNode
3022 Master
3179 Worker
32645 DataNode
660 NodeManager
32544 NameNode
555 ResourceManager
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ bin/spark-submit --master spark://hadoop108:7077 --class org.apache.spark.examples.SparkPi --executor-memory 300m lib/spark-examples-1.0.1-hadoop2.3.0.jar 1000
Spark assembly has been built with Hive, including Datanucleus jars on classpath
14/07/11 19:23:17 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/07/11 19:23:17 INFO SecurityManager: Changing view acls to: spark1
14/07/11 19:23:17 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark1)
14/07/11 19:23:18 INFO Slf4jLogger: Slf4jLogger started
14/07/11 19:23:18 INFO Remoting: Starting remoting
14/07/11 19:23:18 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://spark@hadoop108:58016]
14/07/11 19:23:18 INFO Remoting: Remoting now listens on addresses: [akka.tcp://spark@hadoop108:58016]
14/07/11 19:23:18 INFO SparkEnv: Registering MapOutputTracker
14/07/11 19:23:18 INFO SparkEnv: Registering BlockManagerMaster
14/07/11 19:23:18 INFO DiskBlockManager: Created local directory at /tmp/spark-local-18-5313
14/07/11 19:23:18 INFO MemoryStore: MemoryStore started with capacity 294.9 MB.
14/07/11 19:23:18 INFO ConnectionManager: Bound socket to port 54436 with id = ConnectionManagerId(hadoop108,54436)
14/07/11 19:23:18 INFO BlockManagerMaster: Trying to register BlockManager
14/07/11 19:23:18 INFO BlockManagerInfo: Registering block manager hadoop108:54436 with 294.9 MB RAM
14/07/11 19:23:18 INFO BlockManagerMaster: Registered BlockManager
14/07/11 19:23:18 INFO HttpServer: Starting HTTP Server
14/07/11 19:23:18 INFO HttpBroadcast: Broadcast server started at
14/07/11 19:23:18 INFO HttpFileServer: HTTP File server directory is /tmp/spark-d0bd730e-2d6e-40c8-af20-982c4e832730
14/07/11 19:23:18 INFO HttpServer: Starting HTTP Server
14/07/11 19:23:19 INFO SparkUI: Started SparkUI at http://hadoop108:4040
14/07/11 19:23:19 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/07/11 19:23:19 INFO SparkContext: Added JAR file:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/spark-examples-1.0.1-hadoop2.3.0.jar at
with timestamp 1
14/07/11 19:23:19 INFO AppClient$ClientActor: Connecting to master spark://hadoop108:7077...
14/07/11 19:23:20 INFO SparkContext: Starting job: reduce at SparkPi.scala:35
14/07/11 19:23:20 INFO DAGScheduler: Got job 0 (reduce at SparkPi.scala:35) with 1000 output partitions (allowLocal=false)
14/07/11 19:23:20 INFO DAGScheduler: Final stage: Stage 0(reduce at SparkPi.scala:35)
14/07/11 19:23:20 INFO DAGScheduler: Parents of final stage: List()
14/07/11 19:23:20 INFO DAGScheduler: Missing parents: List()
14/07/11 19:23:20 INFO DAGScheduler: Submitting Stage 0 (MappedRDD[1] at map at SparkPi.scala:31), which has no missing parents
14/07/11 19:23:20 INFO DAGScheduler: Submitting 1000 missing tasks from Stage 0 (MappedRDD[1] at map at SparkPi.scala:31)
14/07/11 19:23:20 INFO TaskSchedulerImpl: Adding task set 0.0 with 1000 tasks
14/07/11 19:23:35 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/07/11 19:23:39 INFO AppClient$ClientActor: Connecting to master spark://hadoop108:7077...
14/07/11 19:23:50 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/07/11 19:23:59 INFO AppClient$ClientActor: Connecting to master spark://hadoop108:7077...
14/07/11 19:24:05 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
14/07/11 19:24:19 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: All masters are unresponsive! Giving up.
14/07/11 19:24:19 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
14/07/11 19:24:19 INFO TaskSchedulerImpl: Cancelling stage 0
14/07/11 19:24:19 INFO DAGScheduler: Failed to run reduce at SparkPi.scala:35
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: All masters are unresponsive! Giving up.
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1219)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$
Analysis: it feels like a resource shortage.
Settings currently under test (in conf/spark-env.sh):
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=500m
I then tried several memory combinations:
export SPARK_WORKER_MEMORY=1000m with --executor-memory 1g
export SPARK_WORKER_MEMORY=2000m with --executor-memory 300m
export SPARK_WORKER_MEMORY=2000m with --executor-memory 1g
The problem persists in every case (the sketch below shows how each combination was applied).
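For reference, this is roughly how each combination was applied (a minimal sketch; it assumes the standalone layout of this spark-1.0.1-bin-2.3.0 install and that the daemons are restarted after every change so the Worker re-registers with the new memory):

# conf/spark-env.sh -- one of the combinations listed above
export SPARK_WORKER_CORES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_MEMORY=2000m

# restart the standalone daemons, then resubmit the example
sbin/stop-all.sh
sbin/start-all.sh
bin/spark-submit --master spark://hadoop108:7077 --class org.apache.spark.examples.SparkPi --executor-memory 300m lib/spark-examples-1.0.1-hadoop2.3.0.jar 1000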
Could anyone please take a look and help?
Reply:
Regarding: Job aborted due to stage failure: All masters are unresponsive! Giving up.
Take a look at the contents of the logs folder under your Spark directory, specifically the master log, and see what error is causing the master to fail.
I suspect the port may already be in use; I have run into that before. A sketch of the check is below.
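A minimal way to run that check (a sketch; it assumes you are in the Spark install directory and the master is on the default port 7077):

# inspect the most recent master log
ls -ltr logs/
more logs/spark-*-org.apache.spark.deploy.master.Master-*.out

# check whether something else is already bound to the master port
netstat -tlnp | grep 7077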
Reply from the original poster (tsingfu1986):
Relevant logs under the logs directory:
[spark1@hadoop108 logs]$ more spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /home/hadoop/app/jdk1.7.0_55/bin/java -cp ::/home/spark1/app/spark-1.0.1-bin-2.3.0/conf:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/spark-assembly
-1.0.1-hadoop2.3.0.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-api-jdo-3.2.1.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-rdbms-3.2
.1.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-core-3.2.2.jar -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apa
che.spark.deploy.master.Master --ip 10.1.253.108 --port 7077 --webui-port 8080
========================================
14/07/11 19:37:48 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/07/11 19:37:48 INFO SecurityManager: Changing view acls to: spark1
14/07/11 19:37:48 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark1)
14/07/11 19:37:49 INFO Slf4jLogger: Slf4jLogger started
14/07/11 19:37:49 INFO Remoting: Starting remoting
14/07/11 19:37:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:37:49 INFO Master: Starting Spark master at spark://10.1.253.108:7077
14/07/11 19:37:49 INFO MasterWebUI: Started MasterWebUI at http://hadoop108:8080
14/07/11 19:37:49 INFO Master: I have been elected leader! New state: ALIVE
14/07/11 19:37:52 INFO Master: Registering worker hadoop108:54802 with 1 cores, 2000.0 MB RAM
14/07/11 19:38:23 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:38:43 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:39:03 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:39:23 INFO Master: akka.tcp://spark@hadoop108:51011 got disassociated, removing it.
14/07/11 19:39:23 INFO Master: akka.tcp://spark@hadoop108:51011 got disassociated, removing it.
14/07/11 19:39:23 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to
Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%.108%3A5301293] was not delivered
. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during
-shutdown'.
14/07/11 19:39:23 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:51011]: Error [Association failed with [akka.tcp://spark@hadoop108:51011]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:51011]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:51011
14/07/11 19:39:23 INFO Master: akka.tcp://spark@hadoop108:51011 got disassociated, removing it.
14/07/11 19:39:23 INFO Master: akka.tcp://spark@hadoop108:51011 got disassociated, removing it.
14/07/11 19:39:23 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:51011]: Error [Association failed with [akka.tcp://spark@hadoop108:51011]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:51011]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:51011
14/07/11 19:39:23 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:51011]: Error [Association failed with [akka.tcp://spark@hadoop108:51011]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:51011]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:51011
14/07/11 19:39:23 INFO Master: akka.tcp://spark@hadoop108:51011 got disassociated, removing it.
[spark1@hadoop108 logs]$
[spark1@hadoop108 logs]$ ls -ltr
total 104
-rw-rw-r-- 1 spark1 spark1  7888 Jul 10 15:50 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out.5
-rw-rw-r-- 1 spark1 spark1  1188 Jul 10 17:58 spark-spark1-org.apache.spark.deploy.worker.Worker--hadoop108.out
-rw-rw-r-- 1 spark1 spark1  1570 Jul 10 17:59 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.5
-rw-rw-r-- 1 spark1 spark1  5123 Jul 11 11:08 spark-spark1-org.apache.spark.deploy.worker.Worker---master-hadoop108.out
-rw-rw-r-- 1 spark1 spark1  1614 Jul 11 18:47 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out.4
-rw-rw-r-- 1 spark1 spark1 11652 Jul 11 18:57 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.4
-rw-rw-r-- 1 spark1 spark1  7882 Jul 11 19:15 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out.3
-rw-rw-r-- 1 spark1 spark1 13274 Jul 11 19:21 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.3
-rw-rw-r-- 1 spark1 spark1  1614 Jul 11 19:23 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out.2
-rw-rw-r-- 1 spark1 spark1  7829 Jul 11 19:27 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.2
-rw-rw-r-- 1 spark1 spark1  1615 Jul 11 19:29 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out.1
-rw-rw-r-- 1 spark1 spark1 10676 Jul 11 19:37 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.1
-rw-rw-r-- 1 spark1 spark1  1615 Jul 11 19:37 spark-spark1-org.apache.spark.deploy.worker.Worker-1-hadoop108.out
-rw-rw-r-- 1 spark1 spark1  4700 Jul 11 19:39 spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out
[spark1@hadoop108 logs]$ more spark-spark1-org.apache.spark.deploy.master.Master-1-hadoop108.out.1
Spark assembly has been built with Hive, including Datanucleus jars on classpath
Spark Command: /home/hadoop/app/jdk1.7.0_55/bin/java -cp ::/home/spark1/app/spark-1.0.1-bin-2.3.0/conf:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/spark-assembly
-1.0.1-hadoop2.3.0.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-api-jdo-3.2.1.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-rdbms-3.2
.1.jar:/home/spark1/app/spark-1.0.1-bin-2.3.0/lib/datanucleus-core-3.2.2.jar -XX:MaxPermSize=128m -Dspark.akka.logLifecycleEvents=true -Xms512m -Xmx512m org.apa
che.spark.deploy.master.Master --ip 10.1.253.108 --port 7077 --webui-port 8080
========================================
14/07/11 19:29:05 INFO SecurityManager: Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
14/07/11 19:29:05 INFO SecurityManager: Changing view acls to: spark1
14/07/11 19:29:05 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(spark1)
14/07/11 19:29:05 INFO Slf4jLogger: Slf4jLogger started
14/07/11 19:29:05 INFO Remoting: Starting remoting
14/07/11 19:29:06 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:29:06 INFO Master: Starting Spark master at spark://10.1.253.108:7077
14/07/11 19:29:06 INFO MasterWebUI: Started MasterWebUI at http://hadoop108:8080
14/07/11 19:29:06 INFO Master: I have been elected leader! New state: ALIVE
14/07/11 19:29:09 INFO Master: Registering worker hadoop108:35172 with 1 cores, 1000.0 MB RAM
14/07/11 19:29:40 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:30:00 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:30:20 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:30:40 INFO Master: akka.tcp://spark@hadoop108:35039 got disassociated, removing it.
14/07/11 19:30:40 INFO Master: akka.tcp://spark@hadoop108:35039 got disassociated, removing it.
14/07/11 19:30:40 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to
Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%.108%3A9634586] was not delivered
. [1] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during
-shutdown'.
14/07/11 19:30:40 INFO Master: akka.tcp://spark@hadoop108:35039 got disassociated, removing it.
14/07/11 19:30:40 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:35039]: Error [Association failed with [akka.tcp://spark@hadoop108:35039]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:35039]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:35039
14/07/11 19:30:40 INFO Master: akka.tcp://spark@hadoop108:35039 got disassociated, removing it.
14/07/11 19:30:40 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:35039]: Error [Association failed with [akka.tcp://spark@hadoop108:35039]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:35039]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:35039
14/07/11 19:30:40 INFO Master: akka.tcp://spark@hadoop108:35039 got disassociated, removing it.
14/07/11 19:30:40 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:35039]: Error [Association failed with [akka.tcp://spark@hadoop108:35039]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:35039]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:35039
14/07/11 19:30:50 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:31:10 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:31:30 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:31:50 INFO Master: akka.tcp://spark@hadoop108:55843 got disassociated, removing it.
14/07/11 19:31:50 INFO Master: akka.tcp://spark@hadoop108:55843 got disassociated, removing it.
14/07/11 19:31:50 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to
Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%.108%3A686119] was not delivered.
[2] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during-
shutdown'.
14/07/11 19:31:50 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:55843]: Error [Association failed with [akka.tcp://spark@hadoop108:55843]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:55843]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:55843
14/07/11 19:31:50 INFO Master: akka.tcp://spark@hadoop108:55843 got disassociated, removing it.
14/07/11 19:31:50 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:55843]: Error [Association failed with [akka.tcp://spark@hadoop108:55843]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:55843]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:55843
14/07/11 19:31:50 INFO Master: akka.tcp://spark@hadoop108:55843 got disassociated, removing it.
14/07/11 19:31:50 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:55843]: Error [Association failed with [akka.tcp://spark@hadoop108:55843]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:55843]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:55843
14/07/11 19:31:50 INFO Master: akka.tcp://spark@hadoop108:55843 got disassociated, removing it.
14/07/11 19:36:39 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:36:59 ERROR EndpointWriter: dropping message [class akka.actor.SelectChildName] for non-local recipient [Actor[akka.tcp://sparkMaster@hadoop108:7077
/]] arriving at [akka.tcp://sparkMaster@hadoop108:7077] inbound addresses are [akka.tcp://sparkMaster@10.1.253.108:7077]
14/07/11 19:37:17 INFO Master: akka.tcp://spark@hadoop108:56063 got disassociated, removing it.
14/07/11 19:37:17 INFO Master: akka.tcp://spark@hadoop108:56063 got disassociated, removing it.
14/07/11 19:37:17 INFO LocalActorRef: Message [akka.remote.transport.ActorTransportAdapter$DisassociateUnderlying] from Actor[akka://sparkMaster/deadLetters] to
Actor[akka://sparkMaster/system/transports/akkaprotocolmanager.tcp0/akkaProtocol-tcp%3A%2F%2FsparkMaster%.108%3A496681] was not delivered
. [3] dead letters encountered. This logging can be turned off or adjusted with configuration settings 'akka.log-dead-letters' and 'akka.log-dead-letters-during
-shutdown'.
14/07/11 19:37:17 INFO Master: akka.tcp://spark@hadoop108:56063 got disassociated, removing it.
14/07/11 19:37:17 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:56063]: Error [Association failed with [akka.tcp://spark@hadoop108:56063]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:56063]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:56063
14/07/11 19:37:17 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:56063]: Error [Association failed with [akka.tcp://spark@hadoop108:56063]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:56063]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:56063
14/07/11 19:37:17 INFO Master: akka.tcp://spark@hadoop108:56063 got disassociated, removing it.
14/07/11 19:37:17 INFO Master: akka.tcp://spark@hadoop108:56063 got disassociated, removing it.
14/07/11 19:37:17 ERROR EndpointWriter: AssociationError [akka.tcp://sparkMaster@10.1.253.108:7077] -> [akka.tcp://spark@hadoop108:56063]: Error [Association failed with [akka.tcp://spark@hadoop108:56063]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@hadoop108:56063]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: hadoop108/10.1.253.108:56063
[spark1@hadoop108 logs]$
Reply from the original poster (tsingfu1986):
I later also noticed that sbin/stop-all.sh sometimes fails to stop the worker:
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ sbin/stop-master.sh
stopping org.apache.spark.deploy.master.Master
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ jps
6145 sbt-launch-0.12.4.jar
373 SecondaryNameNode
32645 DataNode
660 NodeManager
5014 Worker
32544 NameNode
555 ResourceManager
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$
Reply:
I have run into this problem too. Did you solve it?
Reply:
tsingfu1986 wrote:
I later also noticed that sbin/stop-all.sh sometimes fails to stop the worker
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ sbin/sto ...
Well, then the only option is to kill the Worker process with kill -9 <pid>. A rough sketch of that follows.
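Roughly like this (a sketch; <pid> stands for whatever PID jps reports for the Worker):

jps | grep Worker    # find the Worker PID
kill <pid>           # try a normal kill first
kill -9 <pid>        # force-kill only if it still refuses to exit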
Reply:
tsingfu1986 wrote:
I later also noticed that sbin/stop-all.sh sometimes fails to stop the worker
[spark1@hadoop108 spark-1.0.1-bin-2.3.0]$ sbin/sto ...
Also, the command you actually ran there is stop-master.sh, which only stops the master. Was that a typo? (The standalone stop scripts are summarized in the sketch below.)
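For completeness, the standalone stop scripts map roughly like this (a sketch; all three live under sbin/ in this Spark layout):

sbin/stop-master.sh   # stops only the Master
sbin/stop-slaves.sh   # stops the Workers listed in conf/slaves
sbin/stop-all.sh      # stops both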
Reply:
Hello, I am now running into this problem as well: scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
How did you solve it?
Reply:
Did you ever solve this, OP?
Reply from the original poster (tsingfu1986):
snow88 wrote:
Did you ever solve this, OP?
It has been quite a while. As far as I remember, when I consulted the instructor the main direction was to adjust the master-related parameters in spark-env.sh, the slaves file, and the /etc/hosts configuration.
Below I record a similar problem I later hit with spark-shell. Adjusting things along the same lines fixed it, so I am posting it for newcomers' reference (I still do not know the underlying cause of the error).
spark-shell test:
sbin/start-master.sh
sbin/start-slaves.sh
bin/spark-shell --master spark://myubuntu:7077
http://localhost:4040
// Test 1: parallelize demo
val num=sc.parallelize(1 to 10)
val doublenum=num.map(_*2)
val threenum=doublenum.filter(_%3==0)
threenum.collect
threenum.toDebugString
Problem: executing an action reports insufficient memory/resources:
scala> threenum.collect
14/09/13 12:51:46 INFO spark.SparkContext: Starting job: collect at <console>:19
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Got job 0 (collect at <console>:19) with 2 output partitions (allowLocal=false)
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Final stage: Stage 0(collect at <console>:19)
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Parents of final stage: List()
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Missing parents: List()
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Submitting Stage 0 (FilteredRDD[3] at filter at <console>:16), which has no missing parents
14/09/13 12:51:46 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 0 (FilteredRDD[3] at filter at <console>:16)
14/09/13 12:51:46 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
14/09/13 12:52:01 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
bin/spark-shell --master spark://myubuntu:7077 --executor-memory 300m
Test result: the problem persists.
I found akka errors in the master log:
tail -f spark-hadoop-org.apache.spark.deploy.master.Master-1-myubuntu.out
14/09/13 14:57:49 WARN master.Master: Got status update for unknown executor app-34-0001/0
14/09/13 14:57:49 INFO master.Master: akka.tcp://spark@myubuntu:59265 got disassociated, removing it.
14/09/13 14:57:49 ERROR remote.EndpointWriter: AssociationError [akka.tcp://sparkMaster@myubuntu:7077] -> [akka.tcp://spark@myubuntu:59265]: Error [Association failed with [akka.tcp://spark@myubuntu:59265]] [
akka.remote.EndpointAssociationException: Association failed with [akka.tcp://spark@myubuntu:59265]
Caused by: akka.remote.transport.netty.NettyTransport$$anonfun$associate$1$$anon$2: Connection refused: myubuntu/10.0.0.1:59265
bin/spark-shell --master --executor-memory 300m
Test result: normal.
vi conf/spark-env.sh
SPARK_MASTER_IP=myubuntu
vi /etc/hosts
10.0.0.1        myubuntu
Restart the master and the slaves, then start spark-shell:
bin/spark-shell --master spark://myubuntu:7077 --executor-memory 200m
Test result: normal. (A quick verification sketch follows.)
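A quick way to confirm the configuration change took effect (a sketch; the log file name follows the same pattern as the master logs shown earlier): after the restart, the master banner should advertise the hostname rather than the bare IP, and the AssociationError entries should stop appearing.

grep "Starting Spark master" logs/spark-*-org.apache.spark.deploy.master.Master-*.out
# before the change the banner showed the raw IP (e.g. spark://10.1.253.108:7077 above);
# with SPARK_MASTER_IP=myubuntu it is expected to show spark://myubuntu:7077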