SSH Tunnel - does not appear to be using tunnel for all connections
I've been testing out the SSH tunnel feature in NoSQLBooster. My setup is as follows:
LAPTOP -> JumpServer -> Mongo
By that I mean I'm using a jumpserver as the SSH tunnel endpoint, with port forwarding from there to the actual Mongo server. However, from what I'm seeing in the logs, only the initial connection from NoSQLBooster goes through the tunnel, I think to do replica set discovery. The problem is that once it has the replica set members, it then tries to connect to them without using the tunnel, which of course won't work because there is no direct access to Mongo from the laptop.
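To make that concrete, here is a minimal Node.js sketch (not NoSQLBooster's actual code; the local port 58897 and hostnames are taken from my logs below) of the kind of connection that triggers the behavior: the client connects to the tunnel's local endpoint with the replicaSet option, and the driver then tries the advertised member hostnames on its own.

```js
// Minimal sketch with the Node.js driver (3.x, as seen in the client logs).
// localhost:58897 is the SSH tunnel's local end; the real members are
// mongo01.zone{1,2,3}.XXXXX.io:27017 and are NOT reachable from the laptop.
const { MongoClient } = require('mongodb');

const uri = 'mongodb://automation:XXXXX@localhost:58897/' +
            '?authSource=automation&replicaSet=rs0&ssl=true';

MongoClient.connect(uri, { sslValidate: false }, (err, client) => {
  // Because replicaSet is set, the driver runs isMaster against localhost:58897,
  // learns the members' advertised hostnames, and then opens new connections
  // straight to those hostnames -- bypassing the tunnel -- so this times out.
  if (err) return console.error('connect failed:', err.message);
  console.log('connected');
  client.close();
});
```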
For the first example I'm on the public internet, which means there is no direct access to the Mongo server; everything must go through the jumpserver. Here are the logs from the Mongo server:
2019-08-01T00:03:22.184-0400 I NETWORK [listener] connection accepted from 10.1.0.5:44636 #11042 (365 connections now open)
2019-08-01T00:03:22.285-0400 W NETWORK [conn11042] no SSL certificate provided by peer
2019-08-01T00:03:22.321-0400 I NETWORK [conn11042] received client metadata from 10.1.0.5:44636 conn11042: { driver: { name: "nodejs", version: "3.1.7-1" }, os: { type: "Darwin", name: "darwin", architecture: "x64", version: "18.6.0" }, platform: "Node.js v10.2.0, LE, mongodb-core: 3.1.6", application: { name: "NoSQLBoosterV5_22485.805" } }
2019-08-01T00:03:22.355-0400 I NETWORK [conn11042] end connection 10.1.0.5:44636 (364 connections now open)

Notice the first incoming connection from 10.1.0.5 (the jumpserver), where I assume it performs discovery of the replica set. This works and the connection closes; then it should try to connect to each of the members... but there are no further connections, and the client dialog shows a failure and a timeout.
23:03:21.463 *** Starting "Connect to SSH Server" *** {"askForPassword":false,"authMode":2,"keyPath":"/Users/john.polansky/.ssh/id_rsa_XXXXX_john.polansky","passphrase":"","port":22,"server":"ssh01.support.XXXXX.io","username":"john.polansky"}
23:03:21.656 *** Finished "Connect to SSH Server" 193 ms*** {"askForPassword":false,"authMode":2,"keyPath":"/Users/john.polansky/.ssh/id_rsa_XXXXX_john.polansky","passphrase":"","port":22,"server":"ssh01.support.XXXXX.io","username":"john.polansky"}
23:03:21.657 *** Starting MongoClient.connect *** {"options":{"appname":"NoSQLBoosterV5_22485.805","autoReconnect":true,"checkServerIdentity":false,"keepAlive":60000,"minSize":0,"poolSize":0,"promoteLongs":false,"promoteValues":false,"reconnectTries":172800,"socketTimeoutMS":0,"ssl":true,"sslValidate":false,"useNewUrlParser":false},"uri":"mongodb://automation:XXXXX@localhost:58897/?authSource=automation&replicaSet=rs0&ssl=true"}
23:03:21.658 Topology Opening {"topologyId":9}
23:03:21.660 Server Opening {"address":"localhost:58897","topologyId":45}
23:03:22.340 Server Description Changed {"address":"localhost:58897","newDescription":{"address":"localhost:58897","arbiters":[],"hosts":[],"passives":[],"type":"RSPrimary"},"previousDescription":{"address":"localhost:58897","arbiters":[],"hosts":[],"passives":[],"type":"Unknown"},"topologyId":45}
23:03:22.340 Server Closed {"address":"localhost:58897","topologyId":45}
23:03:22.342 Server Opening {"address":"mongo01.zone1.XXXXX.io:27017","topologyId":46}
23:03:22.343 Server Opening {"address":"mongo01.zone2.XXXXX.io:27017","topologyId":47}
23:03:22.345 Server Opening {"address":"mongo01.zone3.XXXXX.io:27017","topologyId":48}
23:03:52.350 Topology Description Changed {"diff":{"servers":[]},"newDescription":{"servers":[],"setName":"rs0","topologyType":"Unknown"},"previousDescription":{"servers":[],"topologyType":"Unknown"},"topologyId":9}
23:03:52.350 Topology Closed {"topologyId":9}
23:03:52.351 Topology Description Changed {"diff":{"servers":[]},"newDescription":{"servers":[],"setName":"rs0","topologyType":"Unknown"},"previousDescription":{"servers":[],"setName":"rs0","topologyType":"Unknown"},"topologyId":9}
23:03:52.351 Topology Closed {"topologyId":9}
23:03:52.352 *** Connect Error *** {"errorLabels":["TransientTransactionError"],"message":"failed to connect to server [mongo01.zone3.XXXXX.io:27017] on first connect [MongoNetworkError: connection 37 to mongo01.zone3.XXXXX.io:27017 timed out]","name":"MongoNetworkError","stack":"MongoNetworkError: failed to connect to server [mongo01.zone3.XXXXX.io:27017] on first connect [MongoNetworkError: connection 37 to mongo01.zone3.XXXXX.io:27017 timed out]\n at Pool.<anonymous> (/Applications/NoSQLBooster for MongoDB.app/Contents/Resources/app.asar/node_modules/mongodb-core/lib/topologies/server.js:564:11)\n at Pool.emit (events.js:182:13)\n at Connection.<anonymous> (/Applications/NoSQLBooster for MongoDB.app/Contents/Resources/app.asar/node_modules/mongodb-core/lib/connection/pool.js:317:12)\n at Object.onceWrapper (events.js:273:13)\n at Connection.emit (events.js:182:13)\n at TLSSocket.<anonymous> (/Applications/NoSQLBooster for MongoDB.app/Contents/Resources/app.asar/node_modules/mongodb-core/lib/connection/connection.js:257:10)\n at Object.onceWrapper (events.js:273:13)\n at TLSSocket.emit (events.js:182:13)\n at TLSSocket.Socket._onTimeout (net.js:445:8)\n at ontimeout (timers.js:427:11)\n at tryOnTimeout (timers.js:289:5)\n at listOnTimeout (timers.js:252:5)\n at Timer.processTimers (timers.js:212:10)"}
For this next example I'm connected to the VPN, which means my laptop DOES have direct access to Mongo. Here are the same logs for connecting, with no changes in the config.
2019-07-31T23:56:14.337-0400 I NETWORK [listener] connection accepted from 10.1.0.5:44628 #11019 (366 connections now open)
2019-07-31T23:56:14.499-0400 W NETWORK [conn11019] no SSL certificate provided by peer
2019-07-31T23:56:14.555-0400 I NETWORK [conn11019] received client metadata from 10.1.0.5:44628 conn11019: { driver: { name: "nodejs", version: "3.1.7-1" }, os: { type: "Darwin", name: "darwin", architecture: "x64", version: "18.6.0" }, platform: "Node.js v10.2.0, LE, mongodb-core: 3.1.6", application: { name: "NoSQLBoosterV5_22485.805" } }
2019-07-31T23:56:14.611-0400 I NETWORK [conn11019] end connection 10.1.0.5:44628 (365 connections now open)
2019-07-31T23:56:14.668-0400 I NETWORK [listener] connection accepted from 172.16.0.94:50430 #11020 (366 connections now open)
2019-07-31T23:56:14.723-0400 W NETWORK [conn11020] no SSL certificate provided by peer
2019-07-31T23:56:14.776-0400 I NETWORK [conn11020] received client metadata from 172.16.0.94:50430 conn11020: { driver: { name: "nodejs", version: "3.1.7-1" }, os: { type: "Darwin", name: "darwin", architecture: "x64", version: "18.6.0" }, platform: "Node.js v10.2.0, LE, mongodb-core: 3.1.6", application: { name: "NoSQLBoosterV5_22485.805" } }
2019-07-31T23:56:15.167-0400 I ACCESS [conn11020] Successfully authenticated as principal automation on automation
2019-07-31T23:56:15.462-0400 I NETWORK [conn11020] end connection 172.16.0.94:50430 (365 connections now open)

Here you can see the initial connection/discovery come in from 10.1.0.5, which is the jumpserver.
Looking at the logs of the primary mongo, you can see the initial connection come in from 10.1.0.5, the IP address of the jumpserver, which is perfect: it connects and works. Then you see a follow-up connection to the primary mongo, BUT it comes from 172.16.0.94, which is my VPN IP address. If I'm right, that means the subsequent connections are skipping the tunnel and going directly over the VPN to connect to Mongo, and they succeed. Here are the client logs:
23:09:51.662 *** Starting "Connect to SSH Server" *** {"askForPassword":false,"authMode":2,"keyPath":"/Users/john.polansky/.ssh/id_rsa_XXXXX_john.polansky","passphrase":"","port":22,"server":"ssh01.support.XXXXX.io","username":"john.polansky"} 23:09:51.864 *** Finished "Connect to SSH Server" 202 ms*** {"askForPassword":false,"authMode":2,"keyPath":"/Users/john.polansky/.ssh/id_rsa_XXXXX_john.polansky","passphrase":"","port":22,"server":"ssh01.support.XXXXX.io","username":"john.polansky"} 23:09:51.865 *** Starting MongoClient.connect *** {"options":{"appname":"NoSQLBoosterV5_22485.805","autoReconnect":true,"checkServerIdentity":false,"keepAlive":60000,"minSize":0,"poolSize":0,"promoteLongs":false,"promoteValues":false,"reconnectTries":172800,"socketTimeoutMS":0,"ssl":true,"sslValidate":false,"useNewUrlParser":false},"uri":"mongodb://automation:XXXXX@localhost:59313/?authSource=automation&replicaSet=rs0&ssl=true"} 23:09:51.866 Topology Opening {"topologyId":10} 23:09:51.867 Server Opening {"address":"localhost:59313","topologyId":50} 23:09:52.843 Server Description Changed {"address":"localhost:59313","newDescription":{"address":"localhost:59313","arbiters":[],"hosts":[],"passives":[],"type":"RSPrimary"},"previousDescription":{"address":"localhost:59313","arbiters":[],"hosts":[],"passives":[],"type":"Unknown"},"topologyId":50} 23:09:52.843 Server Closed {"address":"localhost:59313","topologyId":50} 23:09:52.845 Server Opening {"address":"mongo01.zone1.XXXXX.io:27017","topologyId":51} 23:09:52.846 Server Opening {"address":"mongo01.zone2.XXXXX.io:27017","topologyId":52} 23:09:52.848 Server Opening {"address":"mongo01.zone3.XXXXX.io:27017","topologyId":53} 23:09:53.063 Server Description Changed {"address":"mongo01.zone1.XXXXX.io:27017","newDescription":{"address":"mongo01.zone1.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"RSPrimary"},"previousDescription":{"address":"mongo01.zone1.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"Unknown"},"topologyId":51} 23:09:53.064 Joined "primary" {"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"electionId":{"_bsontype":"ObjectID","id":{"type":"Buffer","data":[127,255,255,255,0,0,0,0,0,0,0,44]}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":true,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.035Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone1.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":false,"setName":"rs0","setVersion":7} 23:09:53.064 Topology Description Changed 
{"diff":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","from":"Unknown","to":"RSPrimary"}]},"newDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"}],"setName":"rs0","topologyType":"Unknown"},"previousDescription":{"servers":[],"topologyType":"Unknown"},"topologyId":10} 23:09:53.079 Server Description Changed {"address":"mongo01.zone3.XXXXX.io:27017","newDescription":{"address":"mongo01.zone3.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"RSSecondary"},"previousDescription":{"address":"mongo01.zone3.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"Unknown"},"topologyId":53} 23:09:53.079 Joined "secondary" {"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":false,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.047Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone3.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":true,"setName":"rs0","setVersion":7} 23:09:53.079 Topology Description Changed {"diff":{"servers":[{"address":"mongo01.zone3.XXXXX.io:27017","from":"Unknown","to":"RSSecondary"}]},"newDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"},{"address":"mongo01.zone3.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"}],"setName":"rs0","topologyType":"ReplicaSetWithPrimary"},"previousDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"}],"setName":"rs0","topologyType":"Unknown"},"topologyId":10} 23:09:53.231 Server Description Changed {"address":"mongo01.zone2.XXXXX.io:27017","newDescription":{"address":"mongo01.zone2.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"RSSecondary"},"previousDescription":{"address":"mongo01.zone2.XXXXX.io:27017","arbiters":[],"hosts":[],"passives":[],"type":"Unknown"},"topologyId":52} 23:09:53.231 Joined "secondary" 
{"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":false,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.181Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone2.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":true,"setName":"rs0","setVersion":7} 23:09:53.231 Topology Description Changed {"diff":{"servers":[{"address":"mongo01.zone2.XXXXX.io:27017","from":"Unknown","to":"RSSecondary"}]},"newDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"},{"address":"mongo01.zone3.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"},{"address":"mongo01.zone2.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"}],"setName":"rs0","topologyType":"ReplicaSetWithPrimary"},"previousDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"},{"address":"mongo01.zone3.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"}],"setName":"rs0","topologyType":"ReplicaSetWithPrimary"},"topologyId":10} 23:09:53.231 Server Heartbeat Started {"connectionId":"mongo01.zone1.XXXXX.io:27017"} 23:09:53.232 Server Heartbeat Started {"connectionId":"mongo01.zone2.XXXXX.io:27017"} 23:09:53.232 Server Heartbeat Started {"connectionId":"mongo01.zone3.XXXXX.io:27017"} 23:09:53.286 Server Heartbeat Succeeded 
{"connectionId":"mongo01.zone1.XXXXX.io:27017","durationMS":55,"reply":{"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"electionId":{"_bsontype":"ObjectID","id":{"type":"Buffer","data":[127,255,255,255,0,0,0,0,0,0,0,44]}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":true,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.257Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone1.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":false,"setName":"rs0","setVersion":7}} 23:09:53.289 Server Heartbeat Succeeded {"connectionId":"mongo01.zone3.XXXXX.io:27017","durationMS":57,"reply":{"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":false,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.257Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone3.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":true,"setName":"rs0","setVersion":7}} 23:09:53.327 Server Heartbeat Succeeded 
{"connectionId":"mongo01.zone2.XXXXX.io:27017","durationMS":95,"reply":{"$clusterTime":{"clusterTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"signature":{"hash":{"_bsontype":"Binary","buffer":{"type":"Buffer","data":[91,73,25,182,138,249,25,114,243,222,16,219,234,81,133,195,169,209,119,177]},"position":20,"sub_type":0},"keyId":{"_bsontype":"Long","high_":1549852830,"low_":1}}},"hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"ismaster":false,"lastWrite":{"lastWriteDate":"2019-08-01T04:09:52.000Z","majorityOpTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}},"majorityWriteDate":"2019-08-01T04:09:52.000Z","opTime":{"t":44,"ts":{"_bsontype":"Timestamp","high_":1564632592,"low_":1}}},"localTime":"2019-08-01T04:09:53.277Z","logicalSessionTimeoutMinutes":30,"maxBsonObjectSize":16777216,"maxMessageSizeBytes":48000000,"maxWireVersion":6,"maxWriteBatchSize":100000,"me":"mongo01.zone2.XXXXX.io:27017","minWireVersion":0,"ok":1,"operationTime":{"_bsontype":"Timestamp","high_":1564632592,"low_":1},"primary":"mongo01.zone1.XXXXX.io:27017","readOnly":false,"secondary":true,"setName":"rs0","setVersion":7}} 23:09:53.575 *** Finished MongoClient.connect 1710 ms *** {"options":{"appname":"NoSQLBoosterV5_22485.805","autoReconnect":true,"checkServerIdentity":false,"keepAlive":60000,"minSize":0,"poolSize":0,"promoteLongs":false,"promoteValues":false,"reconnectTries":172800,"socketTimeoutMS":0,"ssl":true,"sslValidate":false,"useNewUrlParser":false},"uri":"mongodb://automation:XXXXX@localhost:59313/?authSource=automation&replicaSet=rs0&ssl=true"} 23:09:53.575 *** Starting Detecting MongoDB Server BuildInfo *** 23:09:53.631 *** Finished Detecting MongoDB Server BuildInfo 56 ms*** {"dbVersion":"3.6.6","isEnterprise":false} 23:09:53.632 Server Closed {"address":"mongo01.zone1.XXXXX.io:27017","topologyId":51} 23:09:53.632 Server Closed {"address":"mongo01.zone3.XXXXX.io:27017","topologyId":53} 23:09:53.632 Server Closed {"address":"mongo01.zone2.XXXXX.io:27017","topologyId":52} 23:09:53.632 Topology Description Changed {"diff":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","from":"RSPrimary","to":"Unknown"},{"address":"mongo01.zone3.XXXXX.io:27017","from":"RSSecondary","to":"Unknown"},{"address":"mongo01.zone2.XXXXX.io:27017","from":"RSSecondary","to":"Unknown"}]},"newDescription":{"servers":[],"setName":"rs0","topologyType":"Unknown"},"previousDescription":{"servers":[{"address":"mongo01.zone1.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSPrimary"},{"address":"mongo01.zone3.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"},{"address":"mongo01.zone2.XXXXX.io:27017","hosts":["mongo01.zone1.XXXXX.io:27017","mongo01.zone2.XXXXX.io:27017","mongo01.zone3.XXXXX.io:27017"],"setName":"rs0","type":"RSSecondary"}],"setName":"rs0","topologyType":"ReplicaSetWithPrimary"},"topologyId":10} 23:09:53.633 Topology Closed {"topologyId":10}
I'm sorry that was a lot of info, but I wanted to show you all my findings so you can reproduce it. Based on what I'm seeing, the SSH tunnel is only being used for the initial connection; after that it skips the tunnel, which I'm pretty sure is a bug.
I will show you one more example. I cloned my Mongo config and switched it to a standalone single mongo (even though it is actually a replica set member), and guess what: it works. Here are the mongo logs:
2019-08-01T00:13:35.548-0400 I NETWORK [listener] connection accepted from 10.1.0.5:44650 #11066 (365 connections now open)
2019-08-01T00:13:35.714-0400 W NETWORK [conn11066] no SSL certificate provided by peer
2019-08-01T00:13:35.770-0400 I NETWORK [conn11066] received client metadata from 10.1.0.5:44650 conn11066: { driver: { name: "nodejs", version: "3.1.7-1" }, os: { type: "Darwin", name: "darwin", architecture: "x64", version: "18.6.0" }, platform: "Node.js v10.2.0, LE, mongodb-core: 3.1.6", application: { name: "NoSQLBoosterV5_22485.805" } }
2019-08-01T00:13:35.938-0400 I ACCESS [conn11066] Successfully authenticated as principal automation on automation
2019-08-01T00:13:36.108-0400 I NETWORK [conn11066] end connection 10.1.0.5:44650 (364 connections now open)

The connection comes in from 10.1.0.5 (the jumpserver), it connects, and opening the database works (granted, it's a direct connection, not a replica set).
So I think this means there is a bug in the replica set code path, where the members discovered after the initial connection are contacted directly instead of through the tunnel.
I can get by for now with a direct connection, but obviously that isn't ideal. Please let me know how I can help troubleshoot. If you test this out, you'll need to block direct connections from your client to Mongo to simulate it, I think.
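For reference, here is a minimal sketch of the direct-connection workaround I'm using in the meantime (same assumption about the tunnel's local port as above): dropping the replicaSet parameter makes the driver treat the tunnel endpoint as a standalone server, so nothing ever tries to leave the tunnel.

```js
// Workaround sketch: no replicaSet parameter, so the driver stays on the
// single tunneled address and never dials the other members directly.
const { MongoClient } = require('mongodb');

const uri = 'mongodb://automation:XXXXX@localhost:58897/?authSource=automation&ssl=true';

MongoClient.connect(uri, { sslValidate: false }, (err, client) => {
  if (err) throw err;
  console.log('connected through the tunnel only (no replica set awareness or failover)');
  client.close();
});
```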
Thank you for your feedback.
We have worked out a new build to resolve the replica set over SSH tunnel issue. Please download and give it a try.
MacOS: https://nosqlbooster.com/s3/download/releasesv5/nosqlbooster4mongo-5.1.13-beta.1.dmg
Wow, that was amazing feedback and a build to test. However, I think I have bad news, at least as I see it.
To start with, it now appears a replica set will connect over an SSH tunnel. BUT from what I can see, you are only connecting to the first server in the list, not to all of them to discover which is the primary. This means that if the first mongo you have listed in the replica set is the primary, you connect to only that server (not the others), and everything does "appear" to work: you can make writes and updates.
However, if you delete the members, add a secondary as the first one, connect, and monitor the mongo logs, all you see is a single connection from NoSQLBooster to the secondary mongo; it never discovers the other two to find out which is the primary. I believe this is a problem because NoSQLBooster then just blindly connects to the secondary and acts as if everything is normal, which means your writes will fail since you are on a read-only member.
This was from trying to create a collection; as you can see, we are attached to the wrong Mongo.
I think this image displays the issue perfectly: as you can see from the [ME] entry, NoSQLBooster is connected to the secondary, which is not correct. To my knowledge, with a replica set it should automatically discover which member is the primary and connect to it.
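If it helps to reproduce, a quick way to confirm which member the GUI is actually attached to is to run an isMaster check in the connected shell tab (just a sketch; the field names come from the standard isMaster reply):

```js
// Run in the connected shell: on the misrouted connection I'd expect
// ismaster:false and secondary:true, i.e. a read-only member.
var hello = db.isMaster();
printjson({
  me: hello.me,             // the member we are actually talking to
  ismaster: hello.ismaster,
  secondary: hello.secondary,
  primary: hello.primary    // the member writes should be going to
});
```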
Thank you so much for the quick update, but I don't think we've resolved this quite right. It almost looks like you've just removed the discovery function and allowed it to connect to the first mongo, since that one was using the tunnel.
Please let me know if I can provide more details.
We worked out a new build to resolve the replicaSet discovery cache issue. Please try again.
MacOS: https://nosqlbooster.com/s3/download/releasesv5/nosqlbooster4mongo-5.1.13-beta.2.dmg
For more about "can't connect replica set over SSH tunnel", please refer to https://mongobooster.useresponse.com/topic/ssh-tunnel-not-working
The MongoDB Node.js driver (2.1 and later) cannot connect to a MongoDB replica set over SSH like this: replica set members must be reachable from the client by the hostnames listed in the replica set config. In the current beta release, NoSQLBooster discovers the replica set first, then connects directly to a single known member of the set. This is just a workaround, at the cost of losing the Node.js driver's replica set failover functionality.
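Roughly, in driver terms, the approach looks like the sketch below (an illustration only, not the app's actual code): first ask the tunneled member which host is the primary, then open a plain single-server connection to that member.

```js
// Illustration of the two-step workaround (not NoSQLBooster's actual code).
const { MongoClient } = require('mongodb');

// Step 1: use the tunneled endpoint (the local port here is just an example)
// to ask the member for the replica set layout.
const discoveryUri = 'mongodb://automation:XXXXX@localhost:58897/?authSource=automation&ssl=true';

MongoClient.connect(discoveryUri, { sslValidate: false }, (err, client) => {
  if (err) throw err;
  client.db('admin').command({ isMaster: 1 }, (cmdErr, reply) => {
    if (cmdErr) throw cmdErr;
    console.log('primary:', reply.primary, 'members:', reply.hosts);
    // Step 2 (inside the app): point an SSH forward at reply.primary and open a
    // second, standalone connection through it -- no replicaSet option, so the
    // driver's automatic failover is lost, as noted above.
    client.close();
  });
});
```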
Excellent, the new BETA does now appear to check for the primary and perform a direct connection to the PRIMARY. I've read your articles, and I think I follow the limitations. I notice those articles are about 1.5 years old, and the Node.js driver for Mongo has had a 3.x release out for some time now. Do you know when you plan to upgrade, or whether there is a beta program that could be joined?
Thanks for the quick response times.
John
As it is limited by the MongoDB server selection specification, the Node.js driver 3.x also can't connect to a MongoDB replica set over SSH like this. The mongo shell and MongoDB's own GUI, Compass, do not support this kind of connection either.
We'll update the driver in a month or two; it may also be upgraded around the MongoDB 4.2 release.
We don't have a public beta program yet.
Resolved in 5.1.13