I am working on integrating SQLite and Rust. This is a rough next step after my previous notes on reading CSV files with the csv crate.
I started a new project, but something rusqlite's docs don't make especially clear is that unless you use the `bundled` feature, you need SQLite already installed on your system. In particular, they support SQLite 3.14.0 or newer.
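For the record, enabling the bundled build is just a feature flag in Cargo.toml (the version number here is illustrative, not the one I pinned):

```toml
[dependencies]
# `bundled` compiles SQLite from source as part of the crate's build,
# so no system libsqlite3 is needed.
rusqlite = { version = "0.31", features = ["bundled"] }
```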
I followed the instructions on the GitHub mirror repository. On Windows, rusqlite recommends using the `bundled` feature, which is probably what I would do anyway. On Linux it looks like it might be possible to `sudo apt install` the package instead. I went to the SQLite Download page and downloaded the source distribution with autoconf: the GitHub instructions say I'll need to run a configure script, and that tarball says it ships a configure script specifically, so I wanted to trust that one first. Well, that didn't work (the build failed with `/usr/bin/ld: cannot find -lsqlite3: No such file or directory`), so I just went back to `bundled`.
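In hindsight, that linker error usually means the SQLite development package is missing, so on Debian/Ubuntu something like this would likely have unblocked the non-bundled build (package name assumes apt):

```shell
# Installs the SQLite headers and the libsqlite3.so the linker looks for.
sudo apt install libsqlite3-dev
```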
For the life of me, I cannot get the path to the CSV file to work; I even put in the full path. In other news, ha! It looks like spending time reading the csv crate was worth it: this implementation of CSV virtual tables for SQLite uses that crate, as it turns out! Since I can't get the filename option to work and there is no data option where I can pass in raw text, I'll manually insert rows.
I ended up pretty happy with the insert implementation. I tested it locally on a few million rows and added some flamegraphs on this branch (they're in the README). I fought the borrow checker quite a bit towards the end there, but no biggie; I can dig a little deeper later.
This ended up being quite fun. I didn't do anything extremely fancy; I mostly followed rusqlite's tutorial and added a little bit of spice by considering performance and flamegraphs. I think I'm ready to put the last two sessions together into something more substantial. This is always the risk when starting a new project -- step 1, step 2, draw the rest of the freaking owl. Unless there are small shareable snippets, learning experiences, or additional experimentation, the next post will hopefully be a look at a real, live application. Some day, if I ever get this project to a "finished" state, I want to write a post on how I got over the hump on a long-term software project. Small addendum: while doing this session, I came across DuckDB, which promises faster performance for OLAP workloads. This is just a friendly reminder to myself that I saw DuckDB, I acknowledge it, and I'll re-evaluate it at a later time.