NoSQLBooster runs out of memory => blank screen

Christopher Krause shared this problem 6 years ago
Solved

10 JSON documents (7.5 MB each) in a collection kill NoSQLBooster when trying to run something like "db.mycollection.find({})".

I started it in a command shell and got this:

Update for version 4.5.0 is not available (latest version: 4.5.0, downgrade is disallowed.

#

# Fatal error in , line 0

# API fatal error handler returned after process out of memory

#

Backtrace:

GetHandleVerifier [0x01620717+17767]

GetHandleVerifier [0x0167996A+382906]

V8_Fatal [0x0FE36783+83]

v8::SharedArrayBuffer::Externalize [0x0FDC78B2+354]

v8::internal::Builtins::WasmStackGuard [0x100F06B2+53842]

v8::internal::AllocationSpaceName [0x100FACC7+42471]

v8::internal::Builtins::WasmStackGuard [0x100ED9B8+42328]

v8::internal::AllocationSpaceName [0x100F31B3+10963]

v8::internal::AllocationSpaceName [0x100F9298+35768]

GetHandleVerifier [0x016AF24B+602267]

RtlClearAllBits [0x77057E5F+272]

TpCallbackIndependent [0x770408F1+1808]

BaseThreadInitThunk [0x7490336A+18]

RtlInitializeExceptionChain [0x770298F2+99]

RtlInitializeExceptionChain [0x770298C5+54]


---

For the first few tries I did not even get this message, not even in the command shell.


Other viewers (like Studio 3T) show everything correctly, so I do not think it is the data.
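
A query that does not pull all ten large documents into the result tab at once may work around the crash until it is fixed. The following is only a sketch, reusing the collection name from above:

// Look at a single document at a time instead of the whole ~75 MB result set.
db.mycollection.find({}).limit(1)

// Or project only a few known fields so each returned document stays small.
db.mycollection.find({}, { _id: 1 })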

Replies (9)

---

Please disable the following option and try it again.

Menu -> Options -> Options That May Affect Performance -> Enable Fields/Collection Auto Complete.

Also, which OS are you using? If possible, could you please send me a sample JSON file and detailed steps to reproduce the issue?

You can downgrade to 4.1.3 from the following URLs:

Win: https://nosqlbooster.com/s3/download/4.1/mongobooster-4.1.3.exe

Mac: https://nosqlbooster.com/s3/download/4.1/mongobooster-4.1.3.dmg

Linux: https://nosqlbooster.com/s3/download/4.1/mongobooster-4.1.3-x86_64.AppImage

---

Disabling this option does not change anything.

OS: Windows 7

With these JSONs it is a performance issue. As said, Robo 3T opens the collection within a second. But I like your code completion much more.

On Linux it is nearly the same:

#

# Fatal error in , line 0

# API fatal error handler returned after process out of memory

#


==== C stack trace ===============================


/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0xb8df2e) [0x7fad597f8f2e]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(V8_Fatal+0xdf) [0x7fad597faf1f]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0x820973) [0x7fad5948b973]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0xac92ab) [0x7fad597342ab]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0xad2537) [0x7fad5973d537]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0xad1c41) [0x7fad5973cc41]

/tmp/.mount_nosqlbXEaxPn/app/libnode.so(+0xad1b46) [0x7fad5973cb46]

/tmp/.mount_nosqlbXEaxPn/app/nosqlbooster4mongo --type=renderer --no-sandbox --primordial-pipe-token=2865CAA1FA85DABFD455C4CD472EB233 --lang=en-US --app-path=/tmp/.mount_nosqlbXEaxPn/app/resources/app.asar --node-integration=true --webview-tag=true --no-sandbox --enable-pinch --num-raster-threads=4 --enable-main-frame-before-activation --content-image-texture-target=0,0,3553;0,1,3553;0,2,3553;0,3,3553;0,4,3553;0,5,3553;0,6,3553;0,7,3553;0,8,3553;0,9,3553;0,10,3553;0,11,3553;0,12,3553;0,13,3553;0,14,3553;0,15,3553;1,0,3553;1,1,3553;1,2,3553;1,3,3553;1,4,3553;1,5,3553;1,6,3553;1,7,3553;1,8,3553;1,9,3553;1,10,3553;1,11,3553;1,12,3553;1,13,3553;1,14,3553;1,15,3553;2,0,3553;2,1,3553;2,2,3553;2,3,3553;2,4,3553;2,5,3553;2,6,3553;2,7,3553;2,8,3553;2,9,3553;2,10,3553;2,11,3553;2,12,3553;2,13,3553;2,14,3553;2,15,3553;3,0,3553;3,1,3553;3,2,3553;3,3,3553;3,4,3553;3,5,3553;3,6,3553;3,7,3553;3,8,3553;3,9,3553;3,10,3553;3,11,3553;3,12,3553;3,13,3553;3,14,3553;3,15,3553;4,0,3553;4,1,3553;4,2,3553;4,3,3553;4,4,3553;4,5,3553;4,6,3553;4,7,3553;4,8,3553;4,9,3553;4,10,3553;4,11,3553;4,12,3553;4,13,3553;4,14,3553;4,15,3553 --disable-accelerated-video-decode --disable-webrtc-hw-vp8-encoding --disable-gpu-compositing --service-request-channel-token=2865CAA1FA85DABFD455C4CD472EB233 --renderer-client-id=4 --shared-files=v8_natives_data:100,v8_snapshot_data:101() [0x3686468]

/tmp/.mount_nosqlbXEaxPn/app/nosqlbooster4mongo --type=renderer --no-sandbox --primordial-pipe-token=2865CAA1FA85DABFD455C4CD472EB233 --lang=en-US --app-path=/tmp/.mount_nosqlbXEaxPn/app/resources/app.asar --node-integration=true --webview-tag=true --no-sandbox --enable-pinch --num-raster-threads=4 --enable-main-frame-before-activation --content-image-texture-target=0,0,3553;0,1,3553;0,2,3553;0,3,3553;0,4,3553;0,5,3553;0,6,3553;0,7,3553;0,8,3553;0,9,3553;0,10,3553;0,11,3553;0,12,3553;0,13,3553;0,14,3553;0,15,3553;1,0,3553;1,1,3553;1,2,3553;1,3,3553;1,4,3553;1,5,3553;1,6,3553;1,7,3553;1,8,3553;1,9,3553;1,10,3553;1,11,3553;1,12,3553;1,13,3553;1,14,3553;1,15,3553;2,0,3553;2,1,3553;2,2,3553;2,3,3553;2,4,3553;2,5,3553;2,6,3553;2,7,3553;2,8,3553;2,9,3553;2,10,3553;2,11,3553;2,12,3553;2,13,3553;2,14,3553;2,15,3553;3,0,3553;3,1,3553;3,2,3553;3,3,3553;3,4,3553;3,5,3553;3,6,3553;3,7,3553;3,8,3553;3,9,3553;3,10,3553;3,11,3553;3,12,3553;3,13,3553;3,14,3553;3,15,3553;4,0,3553;4,1,3553;4,2,3553;4,3,3553;4,4,3553;4,5,3553;4,6,3553;4,7,3553;4,8,3553;4,9,3553;4,10,3553;4,11,3553;4,12,3553;4,13,3553;4,14,3553;4,15,3553 --disable-accelerated-video-decode --disable-webrtc-hw-vp8-encoding --disable-gpu-compositing --service-request-channel-token=2865CAA1FA85DABFD455C4CD472EB233 --renderer-client-id=4 --shared-files=v8_natives_data:100,v8_snapshot_data:101() [0x368dc23]

/lib/x86_64-linux-gnu/libpthread.so.0(+0x76ba) [0x7fad58a556ba]

/lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7fad51df641d]

---

Would the result of db.getCollection('mycollection').stats() help?

---

Thanks. Please post the result of db.getCollection('mycollection').stats() here. Also, can you tell me how many fields are in your collection?

You can also hold SHIFT to bypass the auto-exec script and avoid a UI freeze when you open a collection.

At present we are on vacation for the Chinese Spring Festival, so we may not respond to this issue promptly.

We will be back in the office on Feb 22, 2018 and follow up on this issue.

---

I wish you a nice vacation.

Sorry, I am new to mongo - how do I count the fields?

The result of stats() is:

{
    "ns" : "db.myCollection",
    "size" : 22536880,
    "count" : 10,
    "avgObjSize" : 2253688,
    "storageSize" : 4894720,
    "capped" : false,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        },
        "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_max=15,merge_min=0),memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
        "type" : "file",
        "uri" : "statistics:table:collection-36-7534013434134222809",
        "LSM" : {
            "bloom filter false positives" : 0,
            "bloom filter hits" : 0,
            "bloom filter misses" : 0,
            "bloom filter pages evicted from cache" : 0,
            "bloom filter pages read into cache" : 0,
            "bloom filters in the LSM tree" : 0,
            "chunks in the LSM tree" : 0,
            "highest merge generation in the LSM tree" : 0,
            "queries that could have benefited from a Bloom filter that did not exist" : 0,
            "sleep for LSM checkpoint throttle" : 0,
            "sleep for LSM merge throttle" : 0,
            "total size of bloom filters" : 0
        },
        "block-manager" : {
            "allocations requiring file extension" : 17,
            "blocks allocated" : 24,
            "blocks freed" : 3,
            "checkpoint size" : 4423680,
            "file allocation unit size" : 4096,
            "file bytes available for reuse" : 454656,
            "file magic number" : 120897,
            "file major version number" : 1,
            "file size in bytes" : 4894720,
            "minor version number" : 0
        },
        "btree" : {
            "btree checkpoint generation" : 10100,
            "column-store fixed-size leaf pages" : 0,
            "column-store internal pages" : 0,
            "column-store variable-size RLE encoded values" : 0,
            "column-store variable-size deleted values" : 0,
            "column-store variable-size leaf pages" : 0,
            "fixed-record size" : 0,
            "maximum internal page key size" : 368,
            "maximum internal page size" : 4096,
            "maximum leaf page key size" : 2867,
            "maximum leaf page size" : 32768,
            "maximum leaf page value size" : 67108864,
            "maximum tree depth" : 3,
            "number of key/value pairs" : 0,
            "overflow pages" : 0,
            "pages rewritten by compaction" : 0,
            "row-store internal pages" : 0,
            "row-store leaf pages" : 0
        },
        "cache" : {
            "bytes currently in the cache" : 24342704,
            "bytes read into cache" : 0,
            "bytes written from cache" : 29298998,
            "checkpoint blocked page eviction" : 0,
            "data source pages selected for eviction unable to be evicted" : 0,
            "eviction walk passes of a file" : 0,
            "eviction walk target pages histogram - 0-9" : 0,
            "eviction walk target pages histogram - 10-31" : 0,
            "eviction walk target pages histogram - 128 and higher" : 0,
            "eviction walk target pages histogram - 32-63" : 0,
            "eviction walk target pages histogram - 64-128" : 0,
            "eviction walks abandoned" : 0,
            "eviction walks gave up because they restarted their walk twice" : 0,
            "eviction walks gave up because they saw too many pages and found no candidates" : 0,
            "eviction walks gave up because they saw too many pages and found too few candidates" : 0,
            "eviction walks reached end of tree" : 0,
            "eviction walks started from root of tree" : 0,
            "eviction walks started from saved location in tree" : 0,
            "hazard pointer blocked page eviction" : 0,
            "in-memory page passed criteria to be split" : 4,
            "in-memory page splits" : 2,
            "internal pages evicted" : 0,
            "internal pages split during eviction" : 0,
            "leaf pages split during eviction" : 0,
            "modified pages evicted" : 0,
            "overflow pages read into cache" : 0,
            "page split during eviction deepened the tree" : 0,
            "page written requiring lookaside records" : 0,
            "pages read into cache" : 0,
            "pages read into cache requiring lookaside entries" : 0,
            "pages requested from the cache" : 201,
            "pages seen by eviction walk" : 0,
            "pages written from cache" : 17,
            "pages written requiring in-memory restoration" : 0,
            "tracked dirty bytes in the cache" : 0,
            "unmodified pages evicted" : 0
        },
        "cache_walk" : {
            "Average difference between current eviction generation when the page was last considered" : 0,
            "Average on-disk page image size seen" : 0,
            "Average time in cache for pages that have been visited by the eviction server" : 0,
            "Average time in cache for pages that have not been visited by the eviction server" : 0,
            "Clean pages currently in cache" : 0,
            "Current eviction generation" : 0,
            "Dirty pages currently in cache" : 0,
            "Entries in the root page" : 0,
            "Internal pages currently in cache" : 0,
            "Leaf pages currently in cache" : 0,
            "Maximum difference between current eviction generation when the page was last considered" : 0,
            "Maximum page size seen" : 0,
            "Minimum on-disk page image size seen" : 0,
            "Number of pages never visited by eviction server" : 0,
            "On-disk page image sizes smaller than a single allocation unit" : 0,
            "Pages created in memory and never written" : 0,
            "Pages currently queued for eviction" : 0,
            "Pages that could not be queued for eviction" : 0,
            "Refs skipped during cache traversal" : 0,
            "Size of the root page" : 0,
            "Total number of pages currently in cache" : 0
        },
        "compression" : {
            "compressed pages read" : 0,
            "compressed pages written" : 13,
            "page written failed to compress" : 0,
            "page written was too small to compress" : 4,
            "raw compression call failed, additional data available" : 0,
            "raw compression call failed, no additional data available" : 0,
            "raw compression call succeeded" : 0
        },
        "cursor" : {
            "bulk-loaded cursor-insert calls" : 0,
            "create calls" : 2,
            "cursor-insert key and value bytes inserted" : 22536890,
            "cursor-remove key bytes removed" : 0,
            "cursor-update value bytes updated" : 0,
            "insert calls" : 10,
            "modify calls" : 0,
            "next calls" : 161,
            "prev calls" : 1,
            "remove calls" : 0,
            "reserve calls" : 0,
            "reset calls" : 182,
            "restarted searches" : 0,
            "search calls" : 0,
            "search near calls" : 143,
            "truncate calls" : 0,
            "update calls" : 0
        },
        "reconciliation" : {
            "dictionary matches" : 0,
            "fast-path pages deleted" : 0,
            "internal page key bytes discarded using suffix compression" : 12,
            "internal page multi-block writes" : 0,
            "internal-page overflow keys" : 0,
            "leaf page key bytes discarded using prefix compression" : 0,
            "leaf page multi-block writes" : 5,
            "leaf-page overflow keys" : 0,
            "maximum blocks required for a page" : 1,
            "overflow values written" : 0,
            "page checksum matches" : 5,
            "page reconciliation calls" : 10,
            "page reconciliation calls for eviction" : 0,
            "pages deleted" : 0
        },
        "session" : {
            "object compaction" : 0,
            "open cursor count" : 2
        },
        "transaction" : {
            "update conflicts" : 0
        }
    },
    "nindexes" : 1,
    "totalIndexSize" : 36864,
    "indexSizes" : {
        "_id_" : 36864
    },
    "ok" : 1
}
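
For reference, the size-related numbers can also be pulled out of stats() without pasting the whole document; a small shell sketch:

// Print only the size-related fields from the collection stats.
var s = db.getCollection("myCollection").stats();
printjson({
    count: s.count,                   // number of documents (10 here)
    size: s.size,                     // total uncompressed BSON size in bytes (22536880 here)
    avgObjSize: s.avgObjSize,         // average document size in bytes (2253688 here)
    storageSize: s.storageSize,       // on-disk size in bytes, snappy-compressed (4894720 here)
    totalIndexSize: s.totalIndexSize  // total index size in bytes (36864 here, just the _id index)
});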

---

Use the following script to get the number of distinct top-level field names:

db.myCollection.aggregate([
    // Turn each document into an array of {k, v} pairs of its top-level fields.
    { $project: { arrayofkeyvalue: { $objectToArray: "$$ROOT" } } },
    // Emit one document per field.
    { $unwind: "$arrayofkeyvalue" },
    // Collect the distinct field names across all documents.
    { $group: { _id: null, allkeys: { $addToSet: "$arrayofkeyvalue.k" } } },
    // Unwind the set and count the names.
    { $unwind: "$allkeys" },
    { $count: "allkeys" }
])

---

This leads to:

{
    "allkeys" : 3
}

---

It seems that there are a lot of embedded document fields. If possible, could you please send some sample JSON data?
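
Since the aggregation above only counts top-level field names, a rough way to gauge how many fields there are including embedded documents is to walk one sample document in the shell. A sketch (it also descends into shell BSON wrappers such as ObjectId, so treat the result as approximate):

// Recursively count every key in a document, including the keys of
// embedded documents and of documents nested inside arrays.
function countKeys(value) {
    if (Array.isArray(value)) {
        return value.map(countKeys).reduce(function (a, b) { return a + b; }, 0);
    }
    if (value !== null && typeof value === "object") {
        return Object.keys(value).map(function (k) {
            return 1 + countKeys(value[k]);
        }).reduce(function (a, b) { return a + b; }, 0);
    }
    return 0;
}

countKeys(db.myCollection.findOne());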

Fixing a problem usually starts with reproducing it. If I can’t reproduce it, then I am only guessing at what’s wrong, and that means I am only guessing that my fix is going to work.

Thanks

---

Do you have an email address I could send it to?

---

Please send it to support@nosqlbooster.com

---

I did.

---

Resolved in V4.5.1
