Building Project 0’s Backend
The Vision
Before Messari had mass layoffs, I was talking to Macbrennan about the vision of cross-margining and becoming the DeFi prime broker. This was a vision I could get behind. I wanted to do something new in DeFi, not something new to me, but something new in the space. It took some selling, but I was sold.
When I came in, I had to decide what to do: put out existing fires, build on top of the existing platform, or throw everything away, put my head down, and start from scratch. I chose to start from scratch. I wanted a new pipeline that was robust, scalable, traceable, allowed for fast iteration, and made DevX across teams simple.
P0 Core Stack
`p0-core`, the monorepo for our backend team, does the following:
- Infrastructure As Code - We use Pulumi, GKE, and json2k8s (a simple deployment framework that converts a JSON file to K8s manifests).
- Data Ingestion - Getting data from various sources and dumping it in our DB.
- Data Processing and Pipeline - Parsing and structuring the raw data.
- New API Model - Thinking of serving internal data needs in a new way.
- Data Insights - Getting insights from our data faster so we can make better decisions.
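Since json2k8s is an internal tool, here's a toy Python sketch of the JSON-to-manifest idea — all field names and the spec shape are assumptions, not json2k8s's actual interface:

```python
import json

def json_to_deployment(spec: dict) -> dict:
    """Toy illustration of the json2k8s idea: turn a small JSON spec into a
    Kubernetes Deployment manifest. The real json2k8s is an internal P0
    tool; the spec fields here are made up for illustration."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": spec["name"]},
        "spec": {
            "replicas": spec.get("replicas", 1),
            "selector": {"matchLabels": {"app": spec["name"]}},
            "template": {
                "metadata": {"labels": {"app": spec["name"]}},
                "spec": {
                    "containers": [
                        {"name": spec["name"], "image": spec["image"]}
                    ]
                },
            },
        },
    }

manifest = json_to_deployment(
    {"name": "ingestor", "image": "p0/ingestor:1.0", "replicas": 2}
)
print(json.dumps(manifest, indent=2))
```

The appeal of this pattern is that one small JSON file per service replaces hand-written YAML while keeping the manifests fully deterministic.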
I’ll zoom into the most interesting parts of the stack.
Dr. Blocks and Switchboard
No one in the Solana ecosystem cuts down CUs like DoctorBlocks. We decided to invest in our relationship with Switchboard, and it’s been one of our most fruitful partnerships. Kobe and DoctorBlocks have led the charge to ensure smooth oracle setups. Switchboard has made this partnership seamless and incredibly collaborative:
- Hands On Support - The team is very responsive and adaptive; they ship what people need.
- Cheap - Simulation and cranking are very cheap.
- Run Your Own Instance - Running your own crossbar instance has never been easier. P0 and Switchboard have partnered to make it easy to spin up your own stack. Soon, you’ll be able to periodically snapshot your feeds with minimal setup.
Oracles can make or break your product; we’ve had oracle issues in the past. We are looking not only to improve our own oracle setup but to make the process so smooth and efficient that no team on Solana ever has to worry about their oracle infrastructure. Switchboard and P0 will continue to cook together.
Carbon x Data Ingestion
We use Carbon, a Solana native library that makes it easy to parse your program. It’s pretty simple:
- Point it at your IDL.
- It generates all your Rust types and handles the parsing overhead.
- Dump the data in your DB.
If Solana is going to succeed, getting on-chain data into your DB needs to be stupidly simple. Solana produces more data than any other chain. If we can lower the barrier to entry for getting your data, we can free developers to focus on building.
This is why P0 has been a proud contributor to Carbon and has helped with various design decisions, tested new features, and made commits to the codebase. Kellian and the SevenLabs team have their work cut out for them, but I’m hopeful they can continue to bring this vision to life.
SQLMesh Magic
Getting data into the DB is one thing; cleaning, transforming, and making use of it is another challenge entirely. dbt can feel heavy and bloated; SQLMesh makes data processing so much easier. Some of its benefits:
- Managed Runs - SQLMesh handles all scheduling in its backend, taking the load off data teams.
- Run Locally - You can run and validate your models locally and ensure everything works as expected.
- Tooling - Great tooling for smooth DevX.
SQLMesh is a great tool and one I want to see used more in the crypto space. It’s great for teams that need to get up and running fast, don’t want to spend time on bloated setups, and need something that just works.
Tobiko’s (the creators of SQLMesh) recent acquisition by Fivetran is a testament to what they’ve built. Ryan Eakman (co-creator of SQLMesh) is cracked, easy to talk to, and cares deeply about the user experience of SQLMesh. We look forward to further collaboration with the Tobiko team.
SCD2 Unlock in Crypto
But the real unlock has been SQLMesh’s implementation of SCD Type 2 models. It’s particularly wonderful for state data. (This will get nerdy; feel free to jump to the next section.)
Let’s assume we have the following records for a bank, where we take either periodic snapshots or get account updates streamed.
| snapshot_time | Account | State |
|---|---|---|
| 2025-08-05 12:00:00 | 0x123 | ABC |
| 2025-08-05 12:20:00 | 0x123 | ABD |
| 2025-08-05 12:25:00 | 0x123 | ABE |
If you want to query the state of this bank at 2025-08-05 12:17:43, you’re going to have to write a generally slow and inefficient query that finds the latest record at or before that timestamp. This is hard to scale, difficult to join on, and leads to tricky query patterns.
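To make the pain concrete, here is a minimal sqlite3 sketch of that "latest record at or before t" pattern, using the snapshot rows from the table above (the schema itself is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bank_snapshots (snapshot_time TEXT, account TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO bank_snapshots VALUES (?, ?, ?)",
    [
        ("2025-08-05 12:00:00", "0x123", "ABC"),
        ("2025-08-05 12:20:00", "0x123", "ABD"),
        ("2025-08-05 12:25:00", "0x123", "ABE"),
    ],
)

# Point-in-time lookup on raw snapshots: scan every row at or before t,
# sort, and keep the newest. Correct, but it costs a sort/scan per lookup
# and is painful to join against other point-in-time tables.
row = conn.execute(
    """
    SELECT state FROM bank_snapshots
    WHERE account = '0x123'
      AND snapshot_time <= '2025-08-05 12:17:43'
    ORDER BY snapshot_time DESC
    LIMIT 1
    """
).fetchone()
print(row[0])  # the state at 12:17:43 is ABC
```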
Here is where SCD models open things up:
| start_time | end_time | Account | State |
|---|---|---|---|
| 2025-08-05 12:00:00 | 2025-08-05 12:20:00 | 0x123 | ABC |
| 2025-08-05 12:20:00 | 2025-08-05 12:25:00 | 0x123 | ABD |
| 2025-08-05 12:25:00 | 9999-12-31 00:00:00 | 0x123 | ABE |
- Ease of Use - SQLMesh’s SCD support makes this model straightforward to implement.
- Start and End Time - This model opens up so many things, starting with querying the state of a bank at a given time. The query is lightning fast, efficient, and easy to write:
```sql
SELECT * FROM banks
WHERE account = '0x123'
  AND '2025-08-05 12:17:43' >= start_time
  AND '2025-08-05 12:17:43' < end_time
```
- Calculating Points - This model makes it easy to see how long a user held a position in a bank as well. Just take the delta between the start and end time and allocate points accordingly per second.
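Both ideas can be sketched against the SCD2 table with stdlib sqlite3 — the point-in-time lookup, plus a toy points calculation (the 1-point-per-second rate is an assumption for illustration):

```python
import sqlite3
from datetime import datetime

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE banks (start_time TEXT, end_time TEXT, account TEXT, state TEXT)")
conn.executemany(
    "INSERT INTO banks VALUES (?, ?, ?, ?)",
    [
        ("2025-08-05 12:00:00", "2025-08-05 12:20:00", "0x123", "ABC"),
        ("2025-08-05 12:20:00", "2025-08-05 12:25:00", "0x123", "ABD"),
        ("2025-08-05 12:25:00", "9999-12-31 00:00:00", "0x123", "ABE"),
    ],
)

# Point-in-time lookup: one half-open interval match, no sorting needed.
state = conn.execute(
    """
    SELECT state FROM banks
    WHERE account = '0x123'
      AND '2025-08-05 12:17:43' >= start_time
      AND '2025-08-05 12:17:43' < end_time
    """
).fetchone()[0]

# Points: seconds each closed row was live, at 1 point per second.
fmt = "%Y-%m-%d %H:%M:%S"
rows = conn.execute(
    "SELECT start_time, end_time FROM banks WHERE end_time != '9999-12-31 00:00:00'"
).fetchall()
points = sum(
    (datetime.strptime(e, fmt) - datetime.strptime(s, fmt)).total_seconds()
    for s, e in rows
)
print(state, points)  # ABC 1500.0
```

Note how the duration math falls straight out of the row itself: no window functions, no self-joins.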
SCD2 is perfect for all state data models. Shoutout to the SQLMesh team for their implementation of it.
Supabase and a New API Model
We’re running our DB on Supabase. It’s got so many great features, especially if you’re not dealing with massive data sets. The data team has developed a new pattern with our app team that has worked absolute wonders.
- `application` schema - We expose all our public-facing data sets in an `application` namespace with strong RLS.
- Use SQLMesh to create views - We leverage SQLMesh to create basic views in the `application` namespace. We also version all of our views: `v_bank_v100`, `v_bank_v110`.
- `supabase-js` - Our application team then uses the incredibly powerful and type-safe Supabase SDK to query the views in the `application` namespace safely, quickly, and with incredible iteration speed.
This setup came to life through great cross-team collaboration. My counterparts on the App team, Bo and Adam, iterated on the process with me until we got it down to a sweet science. Ultimately, this approach has had remarkable benefits:
- DevX - Data team ships a new view, and the app team writes 10 lines of code to get it.
- Better than an ORM - ORMs can be tricky. Supabase’s API handles so much that it’s a much easier experience than connecting directly to a DB.
- Avoid Breaking Changes - We version all of our views, so if we want the team to migrate to a new view, we can create it, wait for the app team to migrate, then deprecate the old view.
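The versioning workflow can be sketched with stdlib sqlite3 (sqlite has no Postgres schemas, so the `application.` prefix is flattened; the view and column names are illustrative, not our real ones):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bank (account TEXT, state TEXT, extra TEXT)")
conn.execute("INSERT INTO bank VALUES ('0x123', 'ABE', 'x')")

# v100: the view the app team currently queries.
conn.execute("CREATE VIEW v_bank_v100 AS SELECT account, state FROM bank")

# Need a breaking change? Ship v110 alongside v100 instead of editing it.
conn.execute("CREATE VIEW v_bank_v110 AS SELECT account, state, extra FROM bank")

# Both versions serve traffic while the app team migrates...
old = conn.execute("SELECT * FROM v_bank_v100").fetchone()
new = conn.execute("SELECT * FROM v_bank_v110").fetchone()

# ...then the old view is dropped once nothing reads it.
conn.execute("DROP VIEW v_bank_v100")
print(old, new)
```

Because consumers only ever see views, the underlying tables can be refactored freely without a coordinated deploy.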
Supabase has been one of my favorite tools for moving fast and building production-grade applications.
Hex for AI-Driven Data Insights
Hex has been the single greatest BI tool I’ve ever used; it’s not even close. They’ve made it incredibly easy and simple for engineers and non-engineers to get useful data insights. Some of my favorite things about Hex:
- Query DB Directly with AI - The AI model knows your schema and understands your data, and with Magic you can easily ask it questions about your data sets.
- SQL to Python to SQL - Write SQL, then transform that DF in Python, then write SQL against that Python DF. Easy transition, a data engineer’s delight.
- Charting and Formatting - Dashboards need to look pretty (unfortunately for me). Hex makes this process incredibly easy (fortunately for me). They’ve got the cleanest UI and make design-friendly dashboards easy.
- Dune x Hex - You can write Dune Queries (yes, that Dune) directly in Hex, then join that Dune output to your data, then turn that into a beautiful chart.
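Hex wires the SQL-to-Python-to-SQL flow up natively; the round trip itself can be sketched with stdlib sqlite3 standing in for Hex's notebook cells (the table and the boost rule are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE deposits (account TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO deposits VALUES (?, ?)",
    [("0x123", 100.0), ("0x123", 50.0), ("0xabc", 10.0)],
)

# Step 1: SQL -> rows (Hex would hand you a DataFrame here).
rows = conn.execute("SELECT account, amount FROM deposits").fetchall()

# Step 2: transform in Python (hypothetical 2x boost for one account).
boosted = [(a, amt * 2 if a == "0x123" else amt) for a, amt in rows]

# Step 3: load the transformed rows back and query them with SQL again.
conn.execute("CREATE TABLE boosted (account TEXT, amount REAL)")
conn.executemany("INSERT INTO boosted VALUES (?, ?)", boosted)
total = conn.execute(
    "SELECT account, SUM(amount) FROM boosted GROUP BY account ORDER BY account"
).fetchall()
print(total)  # [('0x123', 300.0), ('0xabc', 10.0)]
```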
Hex is pricey, but they’re one of my favorite SaaS products out there. If you want faster iteration between business and engineering, there is no competitor.
Challenges
Building this stack wasn’t easy. One of the most difficult challenges is that state data is fleeting. The second is that this is a startup; you want to move fast without introducing massive tech debt. Our solution at P0 has been:
- Right Tool for the Job - You might end up with a few more tools than you want, but everything does what it’s good at.
- Store All Raw State Data - When you get state data, just dump it to a DB and parse it later. Don’t let your parser throw away the state data; the last thing you want to do is find out you weren’t parsing all the important fields and there is no way to get the data back.
- Cross-Team Buy-In - Don’t build your backend in isolation; talk to all other engineering and non-engineering partners early and often. Learn what their hesitation and desires are. As engineers, our job is to serve. Serve your fellow engineers, serve your business team, and serve your customers. This is our North Star; don’t deviate.
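The "dump raw, parse later" advice above can be sketched in a few lines — JSON stands in for raw account bytes, and the table and field names are assumptions:

```python
import base64, json, sqlite3

conn = sqlite3.connect(":memory:")
# Ingestion writes the untouched payload; parsing is a separate, re-runnable step.
conn.execute("CREATE TABLE raw_accounts (slot INTEGER, raw_b64 TEXT)")
conn.execute("CREATE TABLE parsed_accounts (slot INTEGER, owner TEXT)")

payload = json.dumps({"owner": "0x123", "lamports": 42}).encode()
conn.execute(
    "INSERT INTO raw_accounts VALUES (?, ?)",
    (100, base64.b64encode(payload).decode()),
)

def parse_all():
    """Re-parse every raw row from scratch. If we later learn that
    'lamports' matters, we add it here and re-run; nothing was thrown away."""
    conn.execute("DELETE FROM parsed_accounts")
    for slot, raw_b64 in conn.execute("SELECT slot, raw_b64 FROM raw_accounts"):
        obj = json.loads(base64.b64decode(raw_b64))
        conn.execute("INSERT INTO parsed_accounts VALUES (?, ?)", (slot, obj["owner"]))

parse_all()
owner = conn.execute("SELECT owner FROM parsed_accounts").fetchone()[0]
print(owner)  # 0x123
```

The key property is that the parser is idempotent over the raw table, so a parsing bug costs a re-run, not lost data.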
What’s Next
We’ve got many interesting things in the works at P0. Our next major step is building a dynamic risk engine. We want to get our users the best rates, strategies, and UX. If you are a sophisticated trader, yield farmer, or anyone in-between, we are going to do whatever we can to serve you.
If you are an engineer and any of the above was interesting, feel free to reach out on X or Telegram. Solana DeFi wins if we make the pie bigger, and it’s our job as builders to make sure the oven, mixer, and everything in between is functioning.
Disclaimer: No AI was used to review or write this article. All opinions are my own; all mentions of products and integrations are genuine. There were no sponsors of any kind to write this article. All tools and tech earned their way into the article on merit alone.