What is the Destiny 2 Manifest?
Glad you asked, cause this is something I’ve wanted to talk about for a long time. It can be summed up very simply.
The Manifest file, which you might hear other Destiny Dev Community members speak about, is a SQLite database file that contains the information needed to create a local database of pretty much every static asset related to Destiny the game. This file is used by all currently running Destiny apps and websites. In case you’re wondering - yes, there is still one available for Destiny 1.
What is it used for?
Like I stated above, it stores all the static assets in Destiny. That means weapons, armor, modifiers, locations, races, you name it. It’s there. Every mission. Every perk. Right there in an easily accessible format. There is no character-specific information stored within the Manifest. Things such as your characters, or what weapons they currently have equipped, are only available through live HTTP endpoints, but that’s a topic for another blog post. Today, I want to concentrate on the Manifest.
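To give you an idea of the shape of that data, here is a heavily trimmed, made-up example of roughly what a single item definition looks like once you pull it out of the Manifest. The field names mirror the real definitions, but every value below is a placeholder:

// A heavily trimmed item definition - field names mirror the real thing, values are placeholders.
const exampleItemDefinition = {
  displayProperties: {
    name: "Example Hand Cannon",
    description: "A placeholder description.",
    icon: "/common/destiny2_content/icons/example.jpg",
    hasIcon: true
  },
  itemTypeDisplayName: "Hand Cannon",
  hash: 1234567890,
  redacted: false
};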
Why MongoDB is Such a Good Fit for the Destiny 2 Manifest
MongoDB is a NoSQL database, meaning that it’s a document store, and it just so happens that the Manifest is basically a bunch of documents. So, it’s perfect! I’m really not sure why others don’t use Mongo or some other NoSQL solution, but… they don’t. More often than not, they download the Manifest file locally (in the browser) and store it in a local IndexedDB instance or even Local Storage. Ever notice that when visiting your favorite Destiny related app or site, you are typically greeted with a nice loading screen? Well, that’s what’s happening during that loading state.
To be fair, not all sites do this. Speaking personally, Ishtar Collective does not. We use a different relational database on the backend and only download the Manifest after an update. Again though… that’s another blog post (all these ideas for new posts!). And again, in fairness to other devs out there, there is no real harm in the above mentioned practice of using IndexedDB. Heck, I’ve used it before on other Destiny related projects and it’s great.
What makes Mongo, or any other document store, a great fit for the Destiny Manifest is actually something that’s not really expected from NoSQL DBs. It’s mainly the fact that, at build time, the relational data does not have to be parsed - each definition can be dropped in as the document it already is.
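To make that concrete, here’s a rough sketch of what a lookup looks like once those documents are sitting in Mongo: one findOne, no joins, no JSON.parse in the request path. The connection details match the local setup used later in this post, the id field matches the id the loader below assigns to each document, and the hash value is just a placeholder:

const { MongoClient } = require("mongodb");

// Rough sketch: fetch a single definition by hash straight out of Mongo.
// The URL, database name and hash value are placeholders for whatever your setup uses.
async function findItem(hash) {
  const client = await MongoClient.connect("mongodb://localhost:27017", {
    useNewUrlParser: true
  });
  try {
    return await client
      .db("Local_Armory")
      .collection("DestinyInventoryItemDefinition")
      .findOne({ id: hash });
  } finally {
    client.close();
  }
}

findItem(1234567890).then(item => console.log(item && item.displayProperties.name));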
Wait WTF?
Hear me out. I’m currently working on a redesign of a currently popular project, and one of the things I’m working towards is making it a completely JavaScript SSR (Server Side Rendered) site, while keeping it as dynamic as you would expect from any other site. Yes! Local storage will be an issue that I need to pay special attention to moving forward, but that’s ok. What I would like to do with this series is expand on how I expect to make good on this promise. Bear in mind that moving forward within this post means that you agree to see some super special ugly code that you have to accept as MVP. Ok? Cool? Sweet!
Loading the Manifest into MongoDB
First things first, get the Manifest. There are many tuts around the inter-webs that outline this (please, let me know if this is something I should cover…), so we will start as though you already have a Manifest file.
Environment
First things first, we need an environment to work in or around or whatever. I’m currently using Docker on Windows, so in the root of my project directory I have a docker-compose.yml that spins up a simple MongoDB instance:
version: "3.1"
services:
mongo:
image: mongo:3.6
restart: always
ports:
- 27017:27017
volumes:
- dbdata:/data/db
volumes:
dbdata:
driver: local
My package.json looks like the following:
{
  "name": "d2_manifest_to_mongo",
  "version": "0.0.1",
  "description": "This is a simple repo that holds some JS that downloads and extracts the D2 Manifest.",
  "main": "index.js",
  "scripts": {
    "test": "N/A"
  },
  "author": "unsys12",
  "license": "MIT",
  "bugs": {
    "url": "https://gitlab.com/unisys12/d2_manifest_migration/issues"
  },
  "homepage": "https://gitlab.com/unisys12/d2_manifest_migration#readme",
  "devDependencies": {
    "dotenv": "^7.0.0"
  },
  "dependencies": {
    "axios": "^0.18.0",
    "mongodb": "^3.1.13",
    "sqlite3": "^4.0.6",
    "the-traveler": "^1.0.0"
  }
}
Note to Self - Currently using the npm package the-traveler, which is a really nice package, to download and extract the Manifest.
And to store my environment vars, I create a .env file that contains the following:
## Mongo Local Config
DB_URL=mongodb://localhost:27017/Local_Armory
DB_NAME=Local_Armory
From here, making sure that we run docker-compose up --build the first time and docker-compose up thereafter - we code!
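Before writing any real code, it doesn’t hurt to confirm the container is actually reachable. A throwaway script along these lines (reusing the mongodb driver and the .env values from above - the file name is whatever you like) is enough:

require("dotenv").config(); // pulls DB_URL and DB_NAME in from the .env file
const { MongoClient } = require("mongodb");

// Quick sanity check that the Dockerized Mongo instance is up and accepting connections.
MongoClient.connect(process.env.DB_URL, { useNewUrlParser: true })
  .then(client => {
    console.log(`Connected to ${process.env.DB_NAME}`);
    return client.close();
  })
  .catch(err => console.log(`Could not connect -> ${err}`));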
What we want to be able to do is invoke a command, give it a path to our Manifest file, have it process the file, and exit. Something along the lines of the following:
require("dotenv").config(); // Loads our env vars
const Bungie = require("./Bungie"); // import our Bungie Module
Bungie.processDB("./tmp/storage/manifest.content"); // calls our processDB method within our Bungie Module
To do that, create a dir titled Bungie within the root of our project directory and include an index.js file containing the following:
const sqlite3 = require("sqlite3").verbose();

module.exports = {
  processDB(file) {
    const db = new sqlite3.Database(file);
    db.all(
      "select name from sqlite_master where type='table'",
      (err, tables) => {
        if (err) console.log(`Error fetching table names - ${err}`);
        tables.forEach(t => {
          console.log(t);
        });
      }
    );
  }
};
Let’s walk through this method and see what’s going on:

- we create a new SQLite Database instance using the sqlite3 const
- we use the SQLite all method to query the name of every table included in the Manifest
- we then process each of those tables (for now, just logging the name to the console)
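For what it’s worth, each t handed to that callback is just a plain row object with a name property, so the console output ends up being a long list along the lines of the following (the exact table names depend on the Manifest version - these are just a few examples):

{ name: 'DestinyInventoryItemDefinition' }
{ name: 'DestinyActivityDefinition' }
{ name: 'DestinyClassDefinition' }

…and so on, one entry for every table in the file.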
In the above example, you can see that I am storing my Manifest file in the root of the project, within a /tmp/storage/ dir that I created beforehand. This directory can be anywhere, but I advise that it be within the root of your project. It would also be good practice to add this directory to your .gitignore file as well.
The above method is all well and good. It even spits out a crap ton of stuff to the console, but it is only part of the solution. Next, we need to process the contents of each table. Let’s do that!
const sqlite3 = require("sqlite3").verbose();
const mongo = require("../DB");

module.exports = {
  processDB(file) {
    const db = new sqlite3.Database(file);
    db.all(
      "select name from sqlite_master where type='table'",
      (err, tables) => {
        if (err) console.log(`Error fetching table names - ${err}`);
        tables.forEach(t => {
          // pass the open database handle along so processTables can query it
          this.processTables(db, t);
        });
      }
    );
  },
  processTables(db, table) {
    console.log(`Processing ${table.name}`);
    let rows = new Promise((resolve, reject) => {
      db.all(`select * from ${table.name}`, (err, rows) => {
        if (err) return reject(err);
        resolve(rows);
      });
    });
    let data;
    rows
      .then(async x => {
        let batch = [];
        console.log(`Read ${x.length} rows from ${table.name}`);
        for (let r = 0; r < x.length; r++) {
          // every Manifest row stores its definition as a JSON blob in the json column
          data = JSON.parse(x[r].json);
          // give the document a stable id - the definition's hash where one exists
          data.id = data.hash || data.statId || x[r].id;
          batch.push(data);
        }
        await mongo.insert(table.name, batch);
        console.log(`Processed ${batch.length} rows for ${table.name}`);
        console.log(" ");
      })
      .catch(e => console.log(`Error processing row -> ${e}`));
  }
};
What the processTables(db, t) method does is quite simple:

- reads every row of the table it is handed
- iterates over those rows, parsing the json column of each one and assigning the resulting document an id (the definition’s hash where one exists)
- pushes each document into a batch array for that table
- that array is then passed to a custom Mongo insert method
- the number of rows processed is reported to the console
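To picture what that means for a single row, here’s a tiny, made-up example of the transformation. The values are placeholders, but the json column and the hash fallback are exactly what the method above relies on:

// A raw Manifest row - every table stores its definition as a JSON blob in the json column.
const row = {
  id: -1234567890,
  json: '{"displayProperties":{"name":"Example Hand Cannon"},"hash":1234567890}'
};

// Parse the blob and give the resulting document a stable id, just like processTables does.
const doc = JSON.parse(row.json);
doc.id = doc.hash || doc.statId || row.id;

console.log(doc);
// -> { displayProperties: { name: 'Example Hand Cannon' }, hash: 1234567890, id: 1234567890 }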
Interesting Stuff Follows
Now is where things get interesting. Well, not really, but we do get into the land of MongoDB so… more interesting than not! Let’s take a look at that custom Mongo insert method from above.
Within the root dir, create a new directory called “DB” and a file titled index.js. It will contain the following:
const mongo = require("mongodb");

module.exports = {
  async insert(collectionName, batch) {
    try {
      let collection = await loadCollection(collectionName);
      await collection.insertMany(batch);
    } catch (e) {
      console.log(`Error performing insertMany -> ${e}`);
    }
  }
};

async function loadCollection(collectionName) {
  const client = await mongo.MongoClient.connect(process.env.DB_URL, {
    useNewUrlParser: true
  });
  return client.db(process.env.DB_NAME).collection(collectionName);
}
Notice that in the above method, insert, we are actually calling Mongo’s insertMany method and passing the batch array to it? Yep! We are also auto-creating each collection based on the table names as well. In the end, you end up with a MongoDB database that contains one collection for every table in the Manifest.
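If you want to see that for yourself, a small throwaway script like this one (reusing the .env values from earlier) lists the collections that were created:

require("dotenv").config();
const { MongoClient } = require("mongodb");

// List the collections the import created - there should be one for every Manifest table.
async function listManifestCollections() {
  const client = await MongoClient.connect(process.env.DB_URL, {
    useNewUrlParser: true
  });
  const collections = await client
    .db(process.env.DB_NAME)
    .listCollections()
    .toArray();
  console.log(collections.map(c => c.name));
  await client.close();
}

listManifestCollections().catch(e => console.log(`Error listing collections -> ${e}`));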
You can now perform Mongo queries as you normally would. And that comes next week… So, stay tuned!