Verified Commit 3c1ff76e authored by Aral Balkan's avatar Aral Balkan

Fix the basic persistence test

parent c76e58dd
......@@ -703,16 +703,21 @@ const carsThatAreRegal = db.cars.where('tags').includes('regal').get()
- Reads and writes both have O(1) time complexity.
- Reads are fast (they take a fraction of a millisecond and are about an order of magnitude slower than direct memory reads).
- Writes are fast (on the order of a couple of milliseconds in tests on a development machine).
- Initial table load time and full table write/compaction times are O(N) and increase linearly as your table size grows.
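The constant-time claim above can be pictured with a minimal sketch (not JSDB itself, and the names here are illustrative): the whole table lives in memory, so a read is just a property access, and a write is a memory mutation plus a constant-size append to a log.

```javascript
// Minimal sketch (not JSDB itself) of why reads and writes are O(1):
// the table lives in memory as a plain array, so a read is a property
// access; a write also appends one constant-size line to a log.
const table = []                      // in-memory table
const log = []                        // stand-in for the append-only file

// O(1) write: mutate memory, then append one line to the log.
function set (index, value) {
  table[index] = value
  log.push(`_[${index}] = ${JSON.stringify(value)}`)
}

// O(1) read: a plain array index, no disk access at all.
function get (index) { return table[index] }

set(0, { name: 'Aral' })
set(1, { name: 'Laura' })
console.log(get(1).name)              // → Laura
console.log(log.length)               // → 2
```

Initial load, by contrast, must replay the whole log, which is where the O(N) load time above comes from.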
## Limits
## Suggested limits
- Break up your database into multiple tables whenever possible.
- Keep your table sizes under 100MB.
## Hard limits
- Your database size is limited by available memory.
- If your database size is larger than > 1GB, you should start your node process with a larger heap size than the default (~1.4GB). E.g., to set aside 8GB of heap space:
- If your database is larger than ~1.3GB, you should start your node process with a larger heap size than the default (~1.4GB). E.g., to set aside 8GB of heap space:
```
node --max-old-space-size=8192 why-is-my-database-so-large-i-hope-im-not-doing-anything-shady.js
```
## Memory Usage
JSDB is fast because it keeps the whole database in memory. Also, to provide a transparent persistence and query API, it maintains a parallel object structure of proxies. This means that the amount of memory used will be a multiple of the size of your database on disk, and memory usage exhibits O(N) complexity.
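The parallel proxy structure mentioned above can be sketched in a few lines. This is a simplified illustration, not JSDB's actual implementation: each object is wrapped in a `Proxy` whose `set` trap records the mutation for persistence, and nested objects get wrappers of their own, which is why memory use is a multiple of the on-disk size.

```javascript
// Minimal sketch (not JSDB’s actual code) of the parallel proxy idea:
// a Proxy wraps each object; its set trap records every mutation so it
// can be persisted. Holding both the data and its proxies is why
// memory use is a multiple of the table’s size on disk.
const writes = []   // stand-in for the append-only persistence log

function persisted (target, path = '_') {
  return new Proxy(target, {
    get (obj, key) {
      const value = obj[key]
      // Wrap nested objects lazily so deep mutations are seen too.
      return (typeof value === 'object' && value !== null)
        ? persisted(value, `${path}.${String(key)}`)
        : value
    },
    set (obj, key, value) {
      obj[key] = value
      writes.push(`${path}.${String(key)} = ${JSON.stringify(value)}`)
      return true
    }
  })
}

const people = persisted([{ name: 'Aral', age: 44 }])
people[0].age = 45                    // intercepted by the set trap
console.log(writes)                   // → [ '_.0.age = 45' ]
```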
......@@ -723,12 +728,14 @@ For example, here’s just one sample from a development laptop using the simple
| Number of records | Table size on disk | Memory used | Initial load time | Full table write/compaction time |
| ----------------- | ------------------ | ----------- | ----------------- | -------------------------------- |
| 1,000 | 2.5MB | 15.8MB | 41.6ms | 2.7 seconds |
| 10,000 | 25MB | 121.4MB | 380.2ms | 26 seconds |
| 100,000 | 244MB | 1.2GB | 5.5 seconds | 4.6 minutes |
| 1,000 | 2.5MB | 15.8MB | 85ms | 45ms |
| 10,000 | 25MB | 121.4MB | 845ms | 400ms |
| 100,000 | 250MB | 1.2GB | 11 seconds | 4.9 seconds |
(The baseline app used about 14.6MB without any table in memory. The memory used column subtracts that from the total reported memory so as not to skew the smaller dataset results.)
Note: For tables > 500MB, compaction is turned off and a line-by-line streaming load strategy is used instead. If you foresee your tables being this large, you (a) are probably doing something nasty (and won’t mind me pointing it out if you’re not) and (b) should turn off compaction from the start for best performance. Keeping compaction off from the start will decrease initial table load times. Again, don’t use this to invade people’s privacy or profile them.
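The trade-off compaction makes can be shown with a small self-contained sketch (the log format here is hypothetical, not JSDB's actual on-disk format): an append-only log grows with every mutation, and compaction replays it so each key is kept only once, which is exactly the replay that gets skipped for very large tables.

```javascript
// Minimal sketch of what compaction does (hypothetical log format,
// not JSDB’s actual on-disk format). An append-only log keeps every
// historical write; compaction keeps only the last write per key.
const log = [
  ['0', { name: 'Aral', age: 43 }],
  ['1', { name: 'Laura', age: 32 }],
  ['0', { name: 'Aral', age: 44 }],   // later update to record 0
  ['0', { name: 'Aral', age: 45 }]    // and another
]

// Compact: replay the log, keeping only the latest value per key.
function compact (entries) {
  const latest = new Map()
  for (const [key, value] of entries) latest.set(key, value)
  return [...latest.entries()]
}

const compacted = compact(log)
console.log(log.length, '→', compacted.length)   // → 4 → 2
```

Skipping this replay is what makes the streaming load of a huge uncompacted table faster to start, at the cost of a log that keeps growing.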
## Developing
Please open an issue before starting to work on pull requests.
......
......@@ -147,33 +147,23 @@ class JSTable extends EventEmitter {
Time.mark()
log(` 💾 ❨JSDB❩ Loading table ${this.tableName}…`)
// Empirically, I’ve found that the performance of require() and
// the synchronous line-by-line read and eval we’re using are
// about equivalent at around a table size of 63MB on disk.
// (I’ve only tested with a single record size of ~2KB using
// the Faker module’s createCard() method so this may vary for
// other database structures.) Below this limit, require() is
// increasingly faster as you approach zero and the synchronous
// line-by-line read and eval is increasingly faster from
// there on (at around the 200MB mark, about twice as fast).
// Also, note that after the 1GB string size limit the latter
// method is the only viable one.
const REQUIRE_PERFORMANCE_ADVANTAGE_SIZE_LIMIT = 64_512_000 // ~63MB.
const LOAD_STRATEGY_CHANGE_LIMIT = 500_000_000 // bytes.
const tableSize = fs.statSync(this.tablePath).size
// TODO: Need to adapt this to the new method of compaction for smaller datasets.
if (true/*tableSize < REQUIRE_PERFORMANCE_ADVANTAGE_SIZE_LIMIT && !this.#options.alwaysUseLineByLineLoads*/) {
if (true) { // tableSize < LOAD_STRATEGY_CHANGE_LIMIT && !this.#options.alwaysUseLineByLineLoads) {
//
// Faster to load as a module using require().
// Regular load, use require().
//
log(` 💾 ❨JSDB❩ ╰─ Loading table synchronously.`)
this.#data = require(path.resolve(this.tablePath))
} else {
//
// Faster to load line-by-line and eval.
// Large table load strategy.
//
log(` 💾 ❨JSDB❩ ╰─ Streaming table load for large table.`)
log(` 💾 ❨JSDB❩ ╰─ Streaming table load for large table (> 500MB).`)
this.#options.compactOnLoad = false
log(` 💾 ❨JSDB❩ ╰─ Note: compaction is disabled for large tables (> 500MB).`)
const lines = readlineSync(this.tablePath)
//
......
......@@ -120,7 +120,7 @@ test('basic persistence', t => {
//
// Update two properties within the same stack frame.
//
expectedWriteCount = 2
expectedWriteCount = 3
db.people[0].age = 43
db.people[1].age = 33
......@@ -130,11 +130,11 @@ test('basic persistence', t => {
// Second time the listener is called:
//
if (actualWriteCount === 2) {
t.strictEquals(expectedWriteCount, actualWriteCount, 'write 2: expected number of writes has taken place')
t.strictEquals(JSON.stringify(db.people), JSON.stringify(people), 'write 2: original object and data in table are same after property update')
if (actualWriteCount === 3) {
t.strictEquals(expectedWriteCount, actualWriteCount, 'write 3: expected number of writes has taken place')
t.strictEquals(JSON.stringify(db.people), JSON.stringify(people), 'write 3: original object and data in table are same after property update')
const updatedTable = loadTable('db', 'people')
t.strictEquals(JSON.stringify(updatedTable), JSON.stringify(db.people), 'write 2: persisted table matches in-memory table after property update')
t.strictEquals(JSON.stringify(updatedTable), JSON.stringify(db.people), 'write 3: persisted table matches in-memory table after property update')
db.people.removeListener('persist', tableListener)
......