Questions on starting the Locator using the snappydata/bin ./snappy-shell script


SnappyData v. 0.5

Here's the command used to start the locator:

ubuntu@ip-172-31-8-115:/snappydata-0.5-bin/bin$ ./snappy-shell locator start
Starting SnappyData Locator using peer discovery on: 0.0.0.0[10334]
Starting DRDA server for SnappyData at address localhost/127.0.0.1[1527]
Logs generated in /snappydata-0.5-bin/bin/snappylocator.log
SnappyData Locator pid: 9352 status: running

It looks like it starts the DRDA server only locally, with no outside interface that a client can connect to. As a result, I cannot reach the SnappyData locator via a JDBC URL from a client on another host (e.g. the SQuirreL SQL editor).

This does not connect:

jdbc:snappydata://my-aws-public-ip-here:1527/ 

What property do I need to pass to the ./snappy-shell locator start command so that the DRDA server starts on the public IP address instead of "localhost/127.0.0.1"?

Use the -client-bind-address and -client-port options. For the locator, also use the -peer-discovery-address and -peer-discovery-port options to specify the bind address used by the other locators/servers/leads (i.e. the address that they pass in -locators=<address>:<port>):

snappy-shell locator start -peer-discovery-address=<internal IP for peers> -client-bind-address=<public IP for clients>
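For example, on the AWS host above (the addresses below are placeholders for illustration, not values from the original post; on EC2 the public IP is usually not a local interface, so binding the client listener to 0.0.0.0 or the internal address is the practical choice):

./snappy-shell locator start -peer-discovery-address=172.31.8.115 -peer-discovery-port=10334 -client-bind-address=0.0.0.0 -client-port=1527

A remote client such as SQuirreL SQL can then connect through the instance's public address, e.g. jdbc:snappydata://my-aws-public-ip-here:1527/ (provided port 1527 is open in the instance's security group).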

See the output of snappy-shell locator --help for the commonly used options.

For SnappyData releases, you may find it easier to use the global configuration for all of the locators, servers, and leads. Check the "Configuring the Cluster" section of the documentation.

This allows specifying all the options for the JVMs of the cluster in conf/locators, conf/leads, and conf/servers, then starting everything with snappy-start-all.sh, checking status with snappy-status-all.sh, and stopping with snappy-stop-all.sh.
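As a minimal sketch for a single host, reusing the placeholder addresses above (the per-line format, a hostname followed by the same options as the command-line form, is an assumption; check the "Configuring the Cluster" docs for your release):

conf/locators:
172.31.8.115 -peer-discovery-port=10334 -client-bind-address=0.0.0.0 -client-port=1527

conf/servers:
172.31.8.115 -locators=172.31.8.115:10334 -client-bind-address=0.0.0.0

conf/leads:
172.31.8.115 -locators=172.31.8.115:10334

With those files in place, snappy-start-all.sh brings up the whole cluster, snappy-status-all.sh reports its status, and snappy-stop-all.sh shuts it down.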

