
PalanTalk | E32 - Our Tech and Tools

A recording from Rachel @ This Woman Votes and Banner & Backbone Media's live video

Thank you Marcus Flowers, NeuroDivergent Hodgepodge, Rachel Maron, Courtney M 🇨🇦, Martin D. Vasquez, and many others for tuning into my live video with Nick Paro, Shane Yirak, and Banner & Backbone Media!

When Movement, Systems, and AI Converge

Had another interesting conversation with Nick and Shane, not just about travel, policy, or politics, but about how AI-mediated systems are beginning to shape who moves easily, who gets flagged, and who gets filtered out. 2 hours just flew by!

We open with airports, but I think the real subject is decision-making infrastructure. What used to be a human checkpoint is now a layered system of data, models, and automated risk signals. The question underneath the early chit-chat is simple: who is making the decision, and on what basis?

When that basis is opaque or unexplainable, the experience changes. People do not just comply. They anticipate.

Topical Summary | Banner and Backbone | March 23, 2026

SAVE Act and Voter Disenfranchisement

The episode opened with my data analysis on the SAVE Act. Passport ownership rates differ predictably between Democratic and Republican women — partly because of access and partly because of documented patterns of male household control over documentation in conservative communities. The practical result is that the legislation will disenfranchise a larger share of MAGA women than the designers appear to have modeled for. The conversation treated this as a structural design failure, not a political irony: when you legislate without modeling consequences, you hit your own coalition first. A parallel thread on ICE at airports covered control of movement without requiring new law, and the real-world impact on families traveling with trans members.

Military AI and Institutional Capture

The sharpest analytical segment. Palantir was officially designated “critical” to Pentagon operations this week. Shane’s assessment: institutional capture, complete. The Pentagon has dismissed Claude (Anthropic drew a line, reasons debated) and is bringing in Grok. Shane documented a structured conversation in which he walked Grok through its own logic until it produced a table explaining why it should not be integrated into the kill chain, and drafted a message to Musk saying so. Grok admitted, through its own reasoning, that it is not a reliable OSINT source and carries a built-in owner hedge. That is the model now sitting adjacent to lethal targeting.

Project Maven came up as the specific mechanism to watch. Maven writes its own strike justifications. It is not an advisory tool; it operates in a rubber-stamp position. This became concrete when the discussion turned to the Pentagon’s reduction of civilian casualty prevention staff from 2,000 to roughly 20, while conducting over 1,000 strikes in a single day. The structural question is not whether humans are morally responsible. They are. The structural question is whether humans remain in the loop at the moment of decision. The answer is increasingly no.

I am also working through Alex Karp’s 2003 German doctoral dissertation — 129+ pages, not available in English translation. Fourteen days in, through chapter three. The argument is that Palantir’s current operational theory maps directly to Karp’s formative academic writing. Anyone with fluent German and a tolerance for dense academic text should reach out.

The Owl Problem and AI Interpretability

I introduced the “owl problem,” a documented experiment in which an AI trained to be obsessed with owls transmitted that obsession to an untouched model via a set of seemingly random numbers. No one knows why or how. The numbers carried the bias. The discussion then moved to AI systems now communicating through mathematical vectors that human researchers cannot decode. Shane’s framing: if you can’t read what the model is telling itself, you have already lost the interpretability you needed before you built the kill chain. AGI skepticism followed, the argument being that we do not understand how human intelligence forms, so we cannot reliably replicate or contain it.

Sovereign Tech Stack and Local AI

I talked about building a triple-stack local AI: one model ingests data, one performs analytics, one synthesizes. All three are small enough (7B–14B parameters) to run on consumer hardware at high speed, with privacy, customization, and unlimited usage — currently testing on a ten-year-old $300 Walmart laptop. Smaller models hallucinate less, though they are less resilient to garbage input, and the stack has zero dependency on commercial cloud providers. Nothing goes public until it passes the Sovereign Machine Audit. DeepSeek and other Chinese models are in the experimental pool; evaluation is ongoing.
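The triple-stack idea — separate small models for ingestion, analytics, and synthesis, chained locally — can be sketched in a few lines. This is an illustrative sketch only, not the actual stack: the stage functions below are stubs standing in for calls to locally served 7B–14B models, and all names are hypothetical.

```python
# Sketch of a three-stage local pipeline: ingest -> analyze -> synthesize.
# Each stage would normally call a small local model served by a local runtime;
# here the model calls are stubbed so the data flow between stages is visible.

def ingest(raw_docs):
    """Stage 1 (ingestion model): normalize raw inputs into clean text records."""
    return [doc.strip().lower() for doc in raw_docs if doc.strip()]

def analyze(records):
    """Stage 2 (analytics model): extract per-record features (word counts here)."""
    return [{"text": r, "tokens": len(r.split())} for r in records]

def synthesize(features):
    """Stage 3 (synthesis model): combine stage-2 output into one summary."""
    return {
        "records": len(features),
        "total_tokens": sum(f["tokens"] for f in features),
    }

def run_stack(raw_docs):
    # Chaining three small, single-purpose models keeps each stage auditable
    # and avoids any dependency on a commercial cloud provider.
    return synthesize(analyze(ingest(raw_docs)))

summary = run_stack(["First memo.", "  ", "Second, longer memo here."])
print(summary)  # {'records': 2, 'total_tokens': 6}
```

The design point is separation of concerns: a failure or hallucination in one stage is contained and inspectable, rather than buried inside one large opaque model.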

Broadbanner Pages — Tech Demo

Nick demonstrated the Broadbanner Assistant, a locally-runnable content pipeline. It watches a designated Google Drive folder, pulls new documents, converts them to properly formatted Markdown, and files an automated pull request to a GitHub Pages repository. Human review and merge is the only manual step. Deployment of a new episode review that previously took 10–15 minutes now takes two clicks. The tool is open source, requires no cloud dependency, and functions as a deplatforming hedge — content stored on GitHub survives a Substack ban.
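The middle step of that pipeline — converting a pulled document into properly formatted Markdown for a GitHub Pages repo — can be sketched as below. This is not the Broadbanner Assistant’s actual code; the function names and the Jekyll-style front matter are assumptions for illustration, and the Drive watcher and pull-request automation are out of scope.

```python
# Hypothetical sketch of the Markdown-conversion step: wrap a pulled document
# in YAML front matter so a GitHub Pages (Jekyll) site can render it as a post.

import tempfile
from datetime import date
from pathlib import Path

def to_markdown(title: str, body: str, day: date) -> str:
    """Produce a Jekyll-style Markdown page with YAML front matter."""
    front_matter = "\n".join([
        "---",
        f'title: "{title}"',
        f"date: {day.isoformat()}",
        "layout: post",
        "---",
        "",  # blank line between front matter and body
    ])
    return front_matter + body.strip() + "\n"

def write_page(out_dir: Path, slug: str, page: str) -> Path:
    """Write the page where Jekyll expects posts, e.g. a _posts/ directory."""
    out_dir.mkdir(parents=True, exist_ok=True)
    path = out_dir / f"{slug}.md"
    path.write_text(page, encoding="utf-8")
    return path

page = to_markdown("E32 - Our Tech and Tools", "Episode review text.", date(2026, 3, 23))
out = write_page(Path(tempfile.mkdtemp()), "2026-03-23-e32", page)
# After committing the new file, an automated PR step could run something like:
#   gh pr create --title "Add E32 review" --body "Automated import"
# leaving human review and merge as the only manual step.
```

Keeping the output as plain Markdown in a Git repository is what makes this a deplatforming hedge: the content format and the host are both commodity infrastructure.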

The strategic layer matters here: GitHub Pages content is indexed by the sources AI models pull from for training. Starring, sharing, and linking to that content feeds it into model learning. The explicit goal is counter-saturation: building infrastructure to put rigorous, sourced, progressive analysis into the same data streams that currently skew heavily in one direction. The community-brain framing: my analytical frameworks, Nick’s compiled research database, and Shane’s ongoing analysis, combined into a shared, queryable knowledge base with multi-model cross-checking built in.

A prompting discipline show is in planning; Nick and I both flagged AI prompting as a distinct skill that most users are not practicing carefully. The gap between interrogated and uninterrogated AI output is large.

Post-Crisis Democratic Infrastructure

The philosophical anchor of the episode: the United States is constitutionally allergic to repair. That is the argument, stated plainly. The structural holdovers of white supremacy are not accidents; they are features of a system that has never created an institutional pathway for genuine reconciliation. Germany’s cultural resistance to fascism was named as the comparison case, with the caveat that Germany’s resistance is a people’s achievement, not an institutional one. Karp was educated there. The institutions were not sufficient.

The specific argument: we need elected officials who are not personally invested in AI companies and who will engage with actual technical experts on what data pruning, algorithmic correction, and model auditing require. That pool of candidates does not yet exist at sufficient scale. Building it is part of the work.

Logistics

A No Kings rally is approaching this weekend, and a rally prep safety show is planned. PalanTalk continues Mondays at 1pm ET / 10am PT. Sick of the Shit Publications has FAA-compliant 32oz metal water bottles available at shop.sickoftheshitpublications.com.
