Connecting to external Spark


#1

Dockerized Seahorse, version 1.4.2, is running on a Lubuntu 10.17 virtual machine (VMware).
While connecting to an external standalone Spark cluster, I receive this error on the console: “sessionmanager_1 | INFO: [ERROR] [2018-01-09 14:50:44,900] [appclient-registration-retry-thread] org.apache.spark.scheduler.cluster.StandaloneSchedulerBackend - Application has been killed. Reason: All masters are unresponsive! Giving up.”

The application log file is empty (0 bytes).

The Spark cluster settings are:
Master URL = spark://<ip_address_on_LAN>:<service_port>
User IP = <Virtual_Machine_ip_address>

The virtual machine’s IP address belongs to the same LAN as the Spark cluster.

The Spark master and slave are up and running (verified via the web UI).

Spark is also installed on the virtual machine.
Submitting a job to the configured master with spark-shell works fine.
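As a quick sanity check that the master’s TCP port is reachable from where the Seahorse driver runs, a minimal probe can be sketched in Python (the host and port below are placeholders standing in for my <ip_address_on_LAN> and <service_port>; a successful TCP connection does not by itself rule out an RPC-level incompatibility):

```python
import socket

def master_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the Spark master endpoint succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Usage (placeholders -- substitute your actual master address and port):
# master_reachable("192.168.1.10", 7077)
```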

Any suggestions?
Thanks


#2

I’m also having this problem.


#3

Can you please share the Seahorse version, the Spark version Seahorse was set up with, the OS, and the Spark standalone version?


#4

Seahorse version 1.4.2.
Seahorse Spark version: unknown (whatever we get with build/build_all.sh).
Standalone mode on a cluster of Ubuntu 16.04.1 machines running Spark 2.2.1.