pgBackRest Is Back, But Open Source Has a Stewardship Problem It Can't Keep Ignoring
A look at what nearly broke when Postgres' most trusted backup tool went dark for a week, and why the rescue is its own kind of fragile.

For about a week in late April, the most widely deployed backup tool in production Postgres was officially dead. On April 27, pgBackRest's sole maintainer David Steele posted a notice of obsolescence on the project's homepage. Thirteen years of work, no sustained funding, hard stop. The full message asked anyone who forked the codebase to pick a new name.
What followed inside seven days is the part that should make every senior engineer running Postgres in production a little uncomfortable. Steele said his inbox blew up. A coalition of sponsors formed. PGX, a PostgreSQL consulting firm, shipped a continuity fork called pgxbackup within four days, with Steele's blessing on the name change. By the time the maintenance update posted, the project was being described as actively reviving, with a plan to bring on a second maintainer and distribute the workload across multiple corporate funders rather than one.
It's a remarkable recovery. It is also, depending on how you read it, the clearest illustration in years of a problem the Postgres ecosystem has long been able to ignore. A project can look completely healthy on GitHub (steady commits, recent releases, active issues) and still be one acquisition or one tired maintainer away from going dark. The license is permissive. The code is open. The stewardship around it is not actually anyone's job until it becomes everyone's emergency.
The hyperscaler math nobody runs until they have to
Adam Brusselback ran pgBackRest in production for years at GoSimple, the bootstrapped SaaS company he founded and ran until its acquisition by Blacksmith Applications (itself later acquired by TELUS). When Steele's notice posted, Brusselback wrote a LinkedIn post arguing that the code-churn and testing cost of switching backup solutions across large companies would exceed the cost of funding the maintainer. He called the situation "a failure of our economic system." He has been through a forced backup-tool migration before, from WAL-E to pgBackRest, when WAL-E was effectively superseded by WAL-G and he had to revisit his recovery strategy.
"Even for a small shop, it's easily a week," Brusselback told The Read Replica. Process documentation, infrastructure-as-code changes, SOC documentation if you have any, all of it adds up. Move up the stack to a hyperscaler running thousands of Postgres instances and the curve gets nasty. "Get to somewhere like a hyperscaler where you have thousands of instances, and you're going to have probably triple, quadruple the amount of work."
The reason that math matters is that the alternatives are not drop-in. Barman, the most credible competitor on feature parity, sits architecturally on top of pg_basebackup, a fundamentally different design choice. pg_basebackup itself is excellent for cloning a running cluster directory but, per the PostgreSQL documentation, it has no backup catalog, no retention management, and no restore command. pg_dump is a logical export tool, not a physical backup system. Telling a team running petabytes of incremental backups in pgBackRest's repository format to switch to Barman is a rewrite of their disaster recovery posture, their automation, their SOC 2 evidence trail, and in some cases their contractual RTO and RPO commitments.
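For a sense of the surface area a migration has to reproduce, here is a minimal sketch of a pgBackRest setup and the operations teams typically script against. The stanza name and paths are placeholders and exact options vary by version; the point is that the catalog, retention, and restore workflow live inside the tool, while pg_basebackup covers only the copy step.

```
# /etc/pgbackrest/pgbackrest.conf -- illustrative placeholder config,
# not a recommended production setup.
[global]
repo1-path=/var/lib/pgbackrest        # repository with its own backup catalog
repo1-retention-full=2                # retention managed by the tool itself

[main]
pg1-path=/var/lib/postgresql/16/main

# The operational surface automation gets scripted against:
#   pgbackrest --stanza=main stanza-create
#   pgbackrest --stanza=main backup --type=incr   # cataloged incremental
#   pgbackrest --stanza=main info                 # query the backup catalog
#   pgbackrest --stanza=main restore              # catalog-aware restore
#
# pg_basebackup handles only the copy itself (pg_basebackup -D <dir> -X stream);
# everything else above is what a team would have to rebuild around a
# replacement tool.
```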
The cost of a name-only fork, by contrast, is close to zero engineering hours. PGX shipped pgxbackup as a deliberate continuity fork, open under the same license, preserving the configuration language and repository format. "If somebody picks up maintenance of pgBackRest and continues to support it under a different name, that's obviously going to take way less work," Brusselback said. "It's just a trust at that point, whether you trust that entity that's taken control of it."
Brusselback is also the kind of operator who can point to a specific, technical reason this tool is hard to replace. When pgBackRest moved from file-level incremental backups to block-level, his nightly traffic dropped from around 100 GB to under 10 GB, with a corresponding cut in recovery time. That is not a feature you reproduce by writing a wrapper around pg_basebackup. It is years of low-prestige work on the unglamorous parts of file integrity, manifest design, and WAL handling. "Backups are not a prestigious thing," he said. "It's just something that needs to be done, and needs to be done right."
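In configuration terms, that shift is strikingly small, which is part of why the capability is so hard to reproduce elsewhere. A hedged sketch, assuming a recent pgBackRest release (block incremental arrived in the 2.46 era) and the placeholder config above; check the option names against the docs for your version:

```
# Added to the [global] section above -- illustrative and version-dependent.
repo1-bundle=y   # bundle small files in the repository (prerequisite)
repo1-block=y    # ship changed blocks instead of whole changed files

# The effect Brusselback describes: a nightly incremental that re-sent every
# file touched since the last backup (~100 GB for his workload) now sends
# only the changed blocks inside those files (<10 GB).
```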
The healthy ecosystem test
Another practitioner watching this play out in real time was René Cannaò, the CEO and founder of ProxySQL, the connection-management proxy that started in the MySQL world and now also speaks Postgres. ProxySQL is itself open source, sells commercial support, and lives downstream of the same dynamic that almost broke pgBackRest.
"Users should not go into full panic mode," Cannaò told us. "But they should treat what happened as a reminder. Backup and recovery tooling are not optional infrastructure, and continuity is extremely important. It's not just a matter of what features you're using for backup, but also making sure that whatever tools you are using is well maintained."
The framing matters because it identifies the actual evaluation gap. The question to ask about a critical OSS dependency is not whether commits are flowing or whether issues are answered. It is whether the project has an identified owner, a funding pathway, and a continuity plan that survives a single acquisition or a single human walking away. By that standard, pgBackRest looked healthy on GitHub right up until it didn't.
"They need to ensure that there is a maintainer and that the project has continuity in the future," Cannaò said. The owner can be internal headcount, a vendor, a consultant, or a governance body, but it has to be someone. "Open source does not remove operational responsibility. It just makes the responsibility shared," he said. In closed-source, the vendor owns operations. In open source, the user community implicitly inherits it, whether or not anyone signed up.
This is where the Postgres ecosystem has been getting away with something for a long time. PostgreSQL itself has a governance model built across decades, with named committees, a release roadmap, and a deliberate succession structure. Most of the satellite tooling, including tools running in production at hyperscalers, does not. pgBackRest's "bus factor" was widely understood to be one, and it still received the production trust normally reserved for a project with a foundation behind it.
Of course, none of this would have ended well without the people who actually stepped up. Steele carried pgBackRest for thirteen years, much of it on nights and weekends, he said. PGX shipped its continuity fork in four days so users would have somewhere to land. That is open source working the way it is supposed to.
Why the coalition formed (and why that's the most interesting part)
The companies running pgBackRest in production looked at the migration math, ran the same numbers Brusselback walked through, and concluded that a coalition contribution to fund continued maintenance was an order of magnitude cheaper than even one large operator's migration. They were not paying for charity; they were paying down a deferred cost that had finally come due. The community reaction on Hacker News was alarm at first, followed quickly by triage over which large operators would have to step up first.
And the companies that depend most heavily on pgBackRest are the ones whose customers feel the pain of a botched restore most acutely, hyperscalers and managed Postgres providers among them. So the coalition that formed was structurally similar to a small group of competitors collectively funding a piece of shared infrastructure none of them wants to own outright and none of them wants their rivals to free-ride on. "Just pay the person to continue maintaining it," Brusselback said, walking through the same math from the operator side. "Any one of these big companies having to do an all-hands migration to a brand new tool, you're going to spend way more than one person's salary when you have tens of thousands of instances and petabytes of backups in a single format."
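The back-of-envelope version of that argument fits in a few lines. Every figure below is an assumption for illustration, not a number anyone quoted; the point is how lopsided the ratio stays even with generous error bars:

```python
# Break-even sketch: migration cost vs. funding maintenance.
# All figures are illustrative assumptions, not reported costs.

ENGINEER_WEEK = 5_000  # assumed loaded cost of one engineer-week, in USD

small_shop = 1 * ENGINEER_WEEK          # "easily a week" for a small shop
hyperscaler = 500 * 4 * ENGINEER_WEEK   # assumed 500 affected teams doing
                                        # "triple, quadruple the amount of work"
maintainer_year = 250_000               # assumed fully loaded maintainer salary

print(f"small shop migration:   ${small_shop:>12,}")
print(f"hyperscaler migration:  ${hyperscaler:>12,}")
print(f"one maintainer-year:    ${maintainer_year:>12,}")
# Under these assumptions the hyperscaler migration costs $10,000,000 against
# $250,000 for one maintainer-year, and a coalition splits that denominator.
```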
It is also why the rescue is its own kind of fragile. The coalition exists because the migration cost is currently higher than the maintenance cost. That ratio depends on the project staying technically competitive, on no single sponsor reproducing pgBackRest internally and walking away, and on enough firms continuing to act against short-term competitive incentives to keep the lights on.
The sponsors line currently reads just "Supabase." Steele's update suggests more sponsors are imminent and a second maintainer is in the works. Either way, the maintenance update calls for patience, and it does not yet describe the governance structure that would keep this episode from happening again.
The Snowflake-Crunchy backdrop, and the AI displacement everyone notices
The thing that pushed pgBackRest off the cliff was not a community falling-out but a corporate transaction. Steele was sponsored for most of pgBackRest's life by Crunchy Data, where he was employed. Then Snowflake acquired Crunchy Data in June 2025 for a reported $250 million, folding the Postgres specialist into its AI Data Cloud strategy. Steele's announcement partially documents what happened next: the new owner had different priorities, the funding ended, and the maintainer spent months trying and failing to find another sponsor.
"Crunchy Data got acquired by Snowflake, and they decided not to continue," Brusselback said. "There's a consolidation of talent into some of the big players. And those players are going to do what they want to do for their internal goals." That is a pattern not specific to Postgres. The same year that Crunchy Data was acquired, Databricks bought Neon and Oracle made significant cuts to the MySQL team. The pattern is consolidation up and discretionary funding down.
Both Brusselback and Cannaò converge on the same secondary cause, which is the AI capital cycle. "AI dollar signs are in too many people's eyes, and it's clouding some judgment on basic stuff that needs to get taken care of," Brusselback said. Cannaò is slightly more measured about it. "AI is absolutely reshaping how development works, and not just development, but also support." He also noted that the picture is muddier than a single villain. "It is also possible that there is some economic downturn and the use of AI is just accelerating the projection." Layoffs that would have been absorbed in a stronger economy are getting locked in because AI tooling makes the smaller headcount viable.
The reality is that maintenance funding for the unglamorous parts of the data stack is competing for budget against GPU procurement, agentic SRE products, and AI-native database launches. The market is not rewarding continuity work. It is rewarding net-new bets. That is fine when the satellite tooling is robust enough to coast, but dangerous when the satellite tooling is exactly the thing keeping someone's last good restore intact.
What the next maintainer crisis looks like
The pgBackRest near-death was the version of this story with the kindest possible ending. The codebase is solid. The maintainer is well-known. The community response was fast. A continuity fork shipped inside four days from a credible commercial firm. A coalition formed. But the next critical-but-unprestigious Postgres dependency that loses its sponsor could be messier.
Pick any candidate from the satellite ecosystem. The connection poolers. The replication tooling. The monitoring exporters. The extension maintainers carrying production load with two contributors. The PostgreSQL project itself has matured into something durable. Its satellites have not. And the satellites are increasingly load-bearing, especially as agentic workloads, vector workloads, and analytical-on-OLTP convergence push more critical paths through the same Postgres core.
Cannaò's prescription is the right one and also the unfinished one. "It is important that those projects are not simply open source, but they also have some governance." Governance, plus funding pathways that survive a single corporate transaction. None of that exists for pgBackRest yet. What exists, so far, is a maintenance update and a single-logo sponsors line.
The Postgres community got the good outcome this time. The structural question the episode opened up, the one about whether the ecosystem is willing to fund stewardship at the rate it consumes it, has not been answered. It has just been deferred.