<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>Paper Plaine</title>
    <link>https://paperplaine.com</link>
    <description>Fresh research from arXiv explained in plain English, updated twice daily.</description>
    <language>en-us</language>
    <lastBuildDate>Sun, 10 May 2026 08:53:42 GMT</lastBuildDate>
    <atom:link href="https://paperplaine.com/feed.xml" rel="self" type="application/rss+xml"/>
    <item>
      <title>UniPool: A Globally Shared Expert Pool for Mixture-of-Experts</title>
      <link>https://paperplaine.com/papers/unipool-a-globally-shared-expert-pool-for-mixture-of-experts</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/unipool-a-globally-shared-expert-pool-for-mixture-of-experts</guid>
      <pubDate>Thu, 07 May 2026 17:59:44 GMT</pubDate>
      <author>Minbin Huang, Han Shi, Chuanyang Zheng et al.</author>
      <category>AI</category>
      <description>Sharing expert capacity across layers instead of duplicating it per layer</description>
      <content:encoded><![CDATA[<p><em>Sharing expert capacity across layers instead of duplicating it per layer</em></p><p>A new design for mixture-of-experts neural networks treats expert capacity as a shared resource rather than giving each layer its own separate experts. Across five model sizes, this approach reduces validation loss by up to 3.86% and matches the performance of traditional designs while using only 42–67% as many expert parameters, suggesting that the number of experts doesn't need to grow linearly with model depth.</p><p><strong>Why it matters:</strong> Current large language models waste capacity by requiring each layer to have its own set of experts, forcing model size to balloon as networks grow deeper. This work shows you can build more efficient models by pooling experts globally, which directly reduces the computational and memory cost of training and running massive AI systems.</p>]]></content:encoded>
    </item>
    <item>
      <title>Mathematical Modeling of Early Embryonic Cell Cycles of Drosophila melanogaster</title>
      <link>https://paperplaine.com/papers/mathematical-modeling-of-early-embryonic-cell-cycles-of-drosophila-melanogaster</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/mathematical-modeling-of-early-embryonic-cell-cycles-of-drosophila-melanogaster</guid>
      <pubDate>Thu, 07 May 2026 17:22:18 GMT</pubDate>
      <author>Meskerem Abebaw Mebratie, Benedikt Drebes, Katja Kapp et al.</author>
      <category>Biology</category>
      <description>How fruit fly embryos speed up and slow down their cell division</description>
      <content:encoded><![CDATA[<p><em>How fruit fly embryos speed up and slow down their cell division</em></p><p>Fruit fly embryos divide cells in a rapid, synchronized rhythm during early development, and scientists built a mathematical model that explains how. The model shows that one key protein—called CycB—acts like a molecular clock: by gradually changing how quickly it's made, the embryo naturally stretches out its cell cycle timing over the first 14 divisions, matching what happens in real embryos.</p><p><strong>Why it matters:</strong> Understanding how embryonic cell cycles are controlled could reveal what goes wrong in birth defects or cancer, where timing and coordination break down. Since fruit flies share many of the same molecular machines that control human cell division, insights from this model offer a bridge between simple mathematical rules and the complex biology of early development.</p>]]></content:encoded>
    </item>
    <item>
      <title>Per-Market Information Leakage and Order-Flow Skill: Two Methodological Lenses on Informed Trading in Decentralized Prediction Markets</title>
      <link>https://paperplaine.com/papers/per-market-information-leakage-and-order-flow-skill-two-methodological-lenses</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/per-market-information-leakage-and-order-flow-skill-two-methodological-lenses</guid>
      <pubDate>Mon, 04 May 2026 07:22:20 GMT</pubDate>
      <author>Maksym Nechepurenko</author>
      <category>Finance</category>
      <description>Three different ways to spot who&#39;s trading on secret information in prediction markets</description>
      <content:encoded><![CDATA[<p><em>Three different ways to spot who's trading on secret information in prediction markets</em></p><p>Researchers compared three methods for identifying informed traders on decentralized prediction markets and found they actually measure different things — not competing versions of the same measurement. One method flags accounts with consistent winning streaks, another identifies accounts behaving suspiciously over time, and a third measures how much information leaked into individual markets before public announcement. Using all three together catches more genuine insider traders than any single method alone.</p><p><strong>Why it matters:</strong> Prediction markets are increasingly used for real-world forecasting on politics, business, and science, but they only work if prices reflect genuine information rather than insider knowledge or manipulation. The framework here—demonstrated against a real DOJ indictment of a military officer who traded on nonpublic Venezuela intelligence—gives regulators and platform operators a practical toolkit to detect and stop informed traders before they undermine market integrity.</p>]]></content:encoded>
    </item>
    <item>
      <title>ActCam: Zero-Shot Joint Camera and 3D Motion Control for Video Generation</title>
      <link>https://paperplaine.com/papers/actcam-zero-shot-joint-camera-and-3d-motion-control-for-video-generation</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/actcam-zero-shot-joint-camera-and-3d-motion-control-for-video-generation</guid>
      <pubDate>Thu, 07 May 2026 17:59:58 GMT</pubDate>
      <author>Omar El Khalifi, Thomas Rossi, Oscar Fossey et al.</author>
      <category>AI</category>
      <description>Controlling both actor movement and camera angles in AI-generated videos</description>
      <content:encoded><![CDATA[<p><em>Controlling both actor movement and camera angles in AI-generated videos</em></p><p>A new method called ActCam lets filmmakers generate videos where they control both how an actor moves and where the camera points—without needing to train a custom AI model. By carefully layering pose and depth information at different stages of video generation, the system maintains geometric consistency and produces results that human raters prefer, especially when the camera makes large jumps to new angles.</p><p><strong>Why it matters:</strong> Video production typically requires either expensive motion capture setups or manual frame-by-frame editing to coordinate actor movement with camera work. ActCam works with existing AI video generators and requires no retraining, making professional-looking camera control accessible to independent filmmakers and artists who lack studio resources.</p>]]></content:encoded>
    </item>
    <item>
      <title>CLAD: A Clustered Label-Agnostic Federated Learning Framework for Joint Anomaly Detection and Attack Classification</title>
      <link>https://paperplaine.com/papers/clad-a-clustered-label-agnostic-federated-learning-framework-for-joint-anomaly</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/clad-a-clustered-label-agnostic-federated-learning-framework-for-joint-anomaly</guid>
      <pubDate>Thu, 07 May 2026 17:01:19 GMT</pubDate>
      <author>Iason Ofeidis, Nikos Papadis, Randeep Bhatia et al.</author>
      <category>Comp Sci</category>
      <description>Training security systems across IoT devices without sharing raw data</description>
      <content:encoded><![CDATA[<p><em>Training security systems across IoT devices without sharing raw data</em></p><p>A new framework called CLAD trains security systems across thousands of IoT devices while keeping data private and handling the reality that most collected data comes without labels. It achieves 30% better detection of network attacks than existing methods while using half the communication bandwidth, even when 80% of the data lacks security labels.</p><p><strong>Why it matters:</strong> As factories, smart homes, and critical infrastructure rely on millions of connected devices, security breaches can cascade rapidly across networks. CLAD makes it practical for these devices to collectively learn threat patterns without exposing sensitive operational data to central servers, while actually improving detection accuracy by making use of unlabeled data that would otherwise be wasted.</p>]]></content:encoded>
    </item>
    <item>
      <title>SNAPO: Smooth Neural Adjoint Policy Optimization for Optimal Control via Differentiable Simulation</title>
      <link>https://paperplaine.com/papers/snapo-smooth-neural-adjoint-policy-optimization-for-optimal-control-via</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/snapo-smooth-neural-adjoint-policy-optimization-for-optimal-control-via</guid>
      <pubDate>Thu, 07 May 2026 17:01:13 GMT</pubDate>
      <author>Dmitri Goloubentsev, Natalija Karpichina</author>
      <category>Math</category>
      <description>Training AI to make better decisions while instantly measuring risk exposure</description>
      <content:encoded><![CDATA[<p><em>Training AI to make better decisions while instantly measuring risk exposure</em></p><p>Researchers developed SNAPO, a method that trains neural networks to make sequential decisions in complex systems while simultaneously computing how sensitive those decisions are to different inputs and conditions. Unlike existing approaches that either solve small problems slowly or train fast but blind, SNAPO trains a policy in minutes while automatically generating thousands of sensitivity measurements at essentially no extra cost — a single backward pass produces both the training signal and all the risk metrics.</p><p><strong>Why it matters:</strong> Real-world decision systems need both speed and accountability. Energy traders need to know how their storage decisions respond to price swings; pension fund managers need to measure exposure across dozens of risk factors; pharmaceutical manufacturers must document how process changes affect product quality for regulators. SNAPO delivers these sensitivities during training rather than afterward, cutting computation time by orders of magnitude — sensitivity analysis that took hours now takes milliseconds — while keeping the same training budget. This makes AI-driven optimization practical for industries where understanding risk isn't optional.</p>]]></content:encoded>
    </item>
    <item>
      <title>StraTA: Incentivizing Agentic Reinforcement Learning with Strategic Trajectory Abstraction</title>
      <link>https://paperplaine.com/papers/strata-incentivizing-agentic-reinforcement-learning-with-strategic-trajectory</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/strata-incentivizing-agentic-reinforcement-learning-with-strategic-trajectory</guid>
      <pubDate>Thu, 07 May 2026 17:51:16 GMT</pubDate>
      <author>Xiangyuan Xue, Yifan Zhou, Zidong Wang et al.</author>
      <category>AI</category>
      <description>Teaching AI agents to plan ahead instead of just reacting moment-to-moment</description>
      <content:encoded><![CDATA[<p><em>Teaching AI agents to plan ahead instead of just reacting moment-to-moment</em></p><p>A new training method called StraTA helps large language models work better as decision-making agents by having them sketch out a high-level strategy before taking action. On three real-world task environments, the approach achieved success rates above 93% on some benchmarks and needed fewer training examples than existing methods.</p><p><strong>Why it matters:</strong> Current AI agents struggle with long chains of decisions because they react to each step without a plan, making them inefficient and error-prone. StraTA's strategy-first approach could improve AI assistants that handle complex real-world tasks like shopping, research, or household management—reducing the computing power and training data needed to get them working reliably.</p>]]></content:encoded>
    </item>
    <item>
      <title>The frame-level leakage trap: rethinking evaluation protocols for intrinsic image decomposition, with source-separable uncertainty as a case study</title>
      <link>https://paperplaine.com/papers/the-frame-level-leakage-trap-rethinking-evaluation-protocols-for-intrinsic</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-frame-level-leakage-trap-rethinking-evaluation-protocols-for-intrinsic</guid>
      <pubDate>Thu, 07 May 2026 14:37:16 GMT</pubDate>
      <author>Jihwan Woo</author>
      <category>Engineering</category>
      <description>How similar test frames secretly inflate computer vision scores by up to 2 decibels</description>
      <content:encoded><![CDATA[<p><em>How similar test frames secretly inflate computer vision scores by up to 2 decibels</em></p><p>Researchers discovered that a common way of testing image-decomposition algorithms on the MPI Sintel dataset inflates performance scores by 1.6 to 2.0 decibels because spatially similar frames from the same scene leak into both training and test sets. Using the correct evaluation method—splitting by scene rather than by frame—reveals that past reported results were significantly overstated, and the team proposes a new model that estimates uncertainty separately for different image components, allowing it to identify and filter out unreliable pixels, cutting error by 77%.</p><p><strong>Why it matters:</strong> Accurate evaluation standards prevent researchers from chasing inflated performance numbers and wasting effort on algorithms that aren't actually better. The proposed uncertainty method also has practical value: by flagging which pixels it's unsure about, it enables downstream applications to discard unreliable regions and achieve much cleaner results—useful for any system relying on image decomposition in graphics, robotics, or computational photography.</p>]]></content:encoded>
    </item>
    <item>
      <title>Engineering a driven-dissipative bath of altermagnetic quantum magnons for controlling classical dynamics of spins hosting spin waves, domain walls, or skyrmions</title>
      <link>https://paperplaine.com/papers/engineering-a-driven-dissipative-bath-of-altermagnetic-quantum-magnons-for</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/engineering-a-driven-dissipative-bath-of-altermagnetic-quantum-magnons-for</guid>
      <pubDate>Thu, 07 May 2026 15:58:33 GMT</pubDate>
      <author>Felipe Reyes-Osorio, Branislav K. Nikolic</author>
      <category>Physics</category>
      <description>Using quantum magnets to remotely control classical magnetic waves and patterns</description>
      <content:encoded><![CDATA[<p><em>Using quantum magnets to remotely control classical magnetic waves and patterns</em></p><p>Physicists have designed a way to control magnetic behavior in one material by attaching a quantum magnetic layer next to it. The quantum layer acts like a bath that damps and drives the classical magnetic material, creating new ways to tune how magnetic waves, domain walls, and skyrmions (tiny magnetic vortices) move and disappear. This could let engineers manipulate magnetic dynamics without direct electrical or magnetic contact.</p><p><strong>Why it matters:</strong> Magnetic devices are central to data storage and computing, and most current approaches rely on direct control of the magnet itself. This technique offers a new handle for tuning magnetic behavior through an adjacent layer, potentially enabling more efficient or flexible designs for spintronic devices and magnonic circuits. It demonstrates a path to remotely shape how magnetic patterns propagate and annihilate, which matters for encoding and erasing information in next-generation magnetic memory.</p>]]></content:encoded>
    </item>
    <item>
      <title>MASPO: Joint Prompt Optimization for LLM-based Multi-Agent Systems</title>
      <link>https://paperplaine.com/papers/maspo-joint-prompt-optimization-for-llm-based-multi-agent-systems</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/maspo-joint-prompt-optimization-for-llm-based-multi-agent-systems</guid>
      <pubDate>Thu, 07 May 2026 17:35:26 GMT</pubDate>
      <author>Zhexuan Wang, Xuebo Liu, Li Wang et al.</author>
      <category>AI</category>
      <description>Automatically tuning instructions for AI teams that work together</description>
      <content:encoded><![CDATA[<p><em>Automatically tuning instructions for AI teams that work together</em></p><p>When multiple AI agents work together on a task, their individual instructions (prompts) need to work well not just in isolation, but as a coordinated system. A new framework called MASPO automatically improves these prompts by testing how well each agent's output helps the next agent succeed, rather than optimizing each agent separately. Tests across six different tasks show this approach outperforms existing methods by an average of 2.9 percentage points.</p><p><strong>Why it matters:</strong> As companies deploy multi-agent AI systems for complex work, getting these systems to actually cooperate effectively has been a major bottleneck—manually writing and tuning prompts for each agent is slow and often produces suboptimal teamwork. MASPO makes this process automatic and more effective, which could accelerate real-world deployment of AI systems handling tasks like research, customer service, or software development that require coordinated reasoning across multiple specialized agents.</p>]]></content:encoded>
    </item>
    <item>
      <title>Scaling the Queue: Reinforcement Learning for Equitable Call Classification Capacity in NYC Municipal Complaint Systems</title>
      <link>https://paperplaine.com/papers/scaling-the-queue-reinforcement-learning-for-equitable-call-classification</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/scaling-the-queue-reinforcement-learning-for-equitable-call-classification</guid>
      <pubDate>Thu, 07 May 2026 16:06:21 GMT</pubDate>
      <author>Irene Aldridge, Ellie Bae, Siddhesh Darak et al.</author>
      <category>Economics</category>
      <description>Using AI to route 311 complaints fairly across New York City neighborhoods</description>
      <content:encoded><![CDATA[<p><em>Using AI to route 311 complaints fairly across New York City neighborhoods</em></p><p>New York City's 311 complaint system can't keep up with incoming calls, causing longer waits and worse service in poorer neighborhoods. Researchers built an AI system that routes complaints more intelligently—by learning that neighborhoods with repeated complaints actually need faster action, not just those with the most calls. The system reduced unfair service gaps while handling more complaints without replacing human staff.</p><p><strong>Why it matters:</strong> NYC residents in low-income neighborhoods and communities of color have historically waited longer for building inspections and housing repairs. This AI system could cut those wait times by routing complaints to the right teams faster, meaning families get heat restored in winter or unsafe scaffolding fixed sooner. The approach also shows that fair service doesn't mean treating everyone identically—it means understanding which neighborhoods have persistent problems that need priority attention.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Structural Origin of Attention Sink: Variance Discrepancy, Super Neurons, and Dimension Disparity</title>
      <link>https://paperplaine.com/papers/the-structural-origin-of-attention-sink-variance-discrepancy-super-neurons-and</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-structural-origin-of-attention-sink-variance-discrepancy-super-neurons-and</guid>
      <pubDate>Thu, 07 May 2026 17:28:55 GMT</pubDate>
      <author>Siquan Li, Kaiqi Jiang, Jiacheng Sun et al.</author>
      <category>Statistics</category>
      <description>Why language models obsess over the first word and how to fix it</description>
      <content:encoded><![CDATA[<p><em>Why language models obsess over the first word and how to fix it</em></p><p>Large language models tend to give disproportionate attention to initial tokens—a problem called "attention sink"—because of how they aggregate information and process data through their internal layers. Researchers traced this to a specific structural imbalance: early neurons create inconsistent signal strengths that force the model to anchor attention to the first token as a stabilizing mechanism. They proved this causal chain by deliberately triggering attention sinks at different positions, then tested a simple architectural fix that balanced the signals during training and sped up model convergence.</p><p><strong>Why it matters:</strong> Attention sinks waste computational resources and can degrade model performance by forcing the network to concentrate on irrelevant tokens. Understanding the root cause opens the door to cleaner, more efficient models—the architectural tweak the researchers tested could reduce training time and improve how language models process information, with potential benefits for speed and accuracy in real applications.</p>]]></content:encoded>
    </item>
    <item>
      <title>BAMI: Training-Free Bias Mitigation in GUI Grounding</title>
      <link>https://paperplaine.com/papers/bami-training-free-bias-mitigation-in-gui-grounding</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/bami-training-free-bias-mitigation-in-gui-grounding</guid>
      <pubDate>Thu, 07 May 2026 17:59:31 GMT</pubDate>
      <author>Borui Zhang, Bo Zhang, Bo Wang et al.</author>
      <category>AI</category>
      <description>Fixing AI agents that struggle to click the right button on complex screens</description>
      <content:encoded><![CDATA[<p><em>Fixing AI agents that struggle to click the right button on complex screens</em></p><p>AI systems that automate computer tasks often fail when screens are high-resolution or crowded with interface elements. A new technique called BAMI improves accuracy without requiring retraining—boosting one model's performance on a challenging benchmark from 52% to 58%—by breaking down the task into simpler steps and filtering out confusing options.</p><p><strong>Why it matters:</strong> As companies automate more customer service, data entry, and software testing with AI agents, these systems need to reliably click and interact with real websites and applications. This method works with existing AI models off-the-shelf, making it immediately useful for improving the accuracy of automation tools without the expense and time of rebuilding them from scratch.</p>]]></content:encoded>
    </item>
    <item>
      <title>Superposition Is Not Necessary: A Mechanistic Interpretability Analysis of Transformer Representations for Time Series Forecasting</title>
      <link>https://paperplaine.com/papers/superposition-is-not-necessary-a-mechanistic-interpretability-analysis-of</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/superposition-is-not-necessary-a-mechanistic-interpretability-analysis-of</guid>
      <pubDate>Wed, 06 May 2026 17:23:27 GMT</pubDate>
      <author>Alper Yıldırım</author>
      <category>AI</category>
      <description>Why transformers for time series don&#39;t need complex hidden patterns</description>
      <content:encoded><![CDATA[<p><em>Why transformers for time series don't need complex hidden patterns</em></p><p>Transformers work well for predicting time series, but researchers wanted to understand how—specifically whether they use the same clever internal trick (called superposition) that makes them powerful for language. By examining a transformer trained on forecasting, they found transformers actually keep things simple: they don't compress multiple patterns into the same neurons, and they ignore most of their hidden layers when making predictions. This helps explain why straightforward linear models stay competitive with far more complex transformer models.</p><p><strong>Why it matters:</strong> Companies spend millions deploying expensive transformer models for forecasting tasks when simpler, cheaper alternatives work nearly as well. Understanding that transformers aren't actually using sophisticated compositional tricks on time series means practitioners can stop assuming complexity equals better performance and instead choose based on speed, cost, and actual accuracy on their specific problem. This could shift forecasting systems toward simpler, more interpretable models without sacrificing results.</p>]]></content:encoded>
    </item>
    <item>
      <title>On the (In-)Security of the Shuffling Defense in the Transformer Secure Inference</title>
      <link>https://paperplaine.com/papers/on-the-in-security-of-the-shuffling-defense-in-the-transformer-secure-inference</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/on-the-in-security-of-the-shuffling-defense-in-the-transformer-secure-inference</guid>
      <pubDate>Wed, 06 May 2026 13:31:15 GMT</pubDate>
      <author>Zhengyi Li, Yakai Wang, Kang Yang et al.</author>
      <category>Comp Sci</category>
      <description>How shuffling AI model outputs doesn&#39;t actually hide them from hackers</description>
      <content:encoded><![CDATA[<p><em>How shuffling AI model outputs doesn't actually hide them from hackers</em></p><p>A security technique meant to protect AI models during remote computation—shuffling the model's internal activations before revealing them—can be broken for about $1 worth of queries. Researchers show how to align these shuffled values back to their original order, then use them to recover the model's actual weights, demonstrating the attack works on real models like GPT-2.</p><p><strong>Why it matters:</strong> As AI systems move to cloud computing, companies rely on cryptographic defenses to keep model weights secret while still computing results. This attack shows a widely-used shuffling defense provides a false sense of security—meaning companies using it may think their models are protected when they're actually vulnerable to cheap theft. Developers now need better defenses before deploying sensitive models to untrusted servers.</p>]]></content:encoded>
    </item>
    <item>
      <title>Automatically Finding and Validating Unexpected Side-Effects of Interventions on Language Models</title>
      <link>https://paperplaine.com/papers/automatically-finding-and-validating-unexpected-side-effects-of-interventions</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/automatically-finding-and-validating-unexpected-side-effects-of-interventions</guid>
      <pubDate>Wed, 06 May 2026 16:27:23 GMT</pubDate>
      <author>Quintin Pope, Ajay Hayagreeve Balaji, Jacques Thibodeau et al.</author>
      <category>AI</category>
      <description>Automatically discovering hidden side effects when tweaking AI language models</description>
      <content:encoded><![CDATA[<p><em>Automatically discovering hidden side effects when tweaking AI language models</em></p><p>Researchers built an automated system that compares how a language model behaves before and after an intervention—like when engineers try to make it forget certain information or reason better—and generates human-readable descriptions of what changed. In tests on three real interventions (reasoning training, knowledge editing, and unlearning), the system caught both the intended changes and behavioral shifts that engineers hadn't anticipated.</p><p><strong>Why it matters:</strong> AI companies make constant changes to their language models, but it's extremely difficult to know all the ways those changes affect behavior beyond the intended goal. This tool lets engineers systematically audit what else changed, catching surprises before models are deployed. That's critical for safety: a fix intended to make a model more helpful might accidentally make it worse at something else, and discovering that requires more than checking the intended behavior.</p>]]></content:encoded>
    </item>
    <item>
      <title>Symmetric Bessmertnyĭ Realizations and Field Extension Problems in Characteristic 2 - A Differential Algebra Approach</title>
      <link>https://paperplaine.com/papers/symmetric-bessmertnyi-realizations-and-field-extension-problems-in</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/symmetric-bessmertnyi-realizations-and-field-extension-problems-in</guid>
      <pubDate>Wed, 06 May 2026 13:38:01 GMT</pubDate>
      <author>Soumya Sinha Babu, Aaron Welters</author>
      <category>Math</category>
      <description>A simpler way to check when complex systems have valid mathematical structures</description>
      <content:encoded><![CDATA[<p><em>A simpler way to check when complex systems have valid mathematical structures</em></p><p>Mathematicians found a purely algebraic method to verify when certain matrix structures—called Symmetric Bessmertnyĭ realizations—can exist in characteristic 2 fields, a setting where ordinary arithmetic rules break down. The new approach uses calculus-like tools on rational functions to reduce the problem from checking entire matrices to checking just their diagonal entries, making verification much simpler.</p><p><strong>Why it matters:</strong> Linear systems theory relies on these realizations to describe how systems behave, and the new algebraic proof works in characteristic 2 fields, which appear in coding theory and digital systems where all arithmetic happens modulo 2. The simpler method makes it practical to verify whether a given system has a valid mathematical representation without running complex algorithms, and also reveals new connections between realizability and field extensions that could inform future designs.</p>]]></content:encoded>
    </item>
    <item>
      <title>Release-free electro-optomechanical crystal modulator</title>
      <link>https://paperplaine.com/papers/release-free-electro-optomechanical-crystal-modulator</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/release-free-electro-optomechanical-crystal-modulator</guid>
      <pubDate>Wed, 06 May 2026 17:54:13 GMT</pubDate>
      <author>Paul Burger, Joey Frey, Johan Kolvik et al.</author>
      <category>Physics</category>
      <description>A better bridge between quantum computers and fiber optic networks</description>
      <content:encoded><![CDATA[<p><em>A better bridge between quantum computers and fiber optic networks</em></p><p>Researchers built a device that converts signals between microwave circuits in quantum computers and optical fibers with less thermal noise than previous designs. By combining two materials—silicon and lithium niobate—using a precise printing technique, they achieved the strong signal conversion needed for practical quantum-to-optical communication.</p><p><strong>Why it matters:</strong> Quantum computers currently sit isolated on lab benches because they can't efficiently send information over long distances. This device could become the missing link that lets distant quantum computers talk to each other and to optical networks, making large-scale quantum computing infrastructure actually possible.</p>]]></content:encoded>
    </item>
    <item>
      <title>Flow Sampling: Learning to Sample from Unnormalized Densities via Denoising Conditional Processes</title>
      <link>https://paperplaine.com/papers/flow-sampling-learning-to-sample-from-unnormalized-densities-via-denoising</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/flow-sampling-learning-to-sample-from-unnormalized-densities-via-denoising</guid>
      <pubDate>Tue, 05 May 2026 17:07:37 GMT</pubDate>
      <author>Aaron Havens, Brian Karrer, Neta Shaul</author>
      <category>AI</category>
      <description>Teaching AI to sample from mathematical functions without wasting computation</description>
      <content:encoded><![CDATA[<p><em>Teaching AI to sample from mathematical functions without wasting computation</em></p><p>Researchers developed Flow Sampling, a method that lets AI systems efficiently generate samples from complex mathematical distributions defined by energy functions—without needing actual data to learn from. The technique cuts down how many times the expensive energy function must be evaluated during training, and works not just in ordinary space but also on curved mathematical surfaces like spheres and hyperbolic geometries.</p><p><strong>Why it matters:</strong> Many real problems in physics, chemistry, and statistics require sampling from distributions where you know the underlying energy function but can't directly sample from it. This method makes that process far cheaper computationally, opening the door to faster simulations of molecular structures, protein folding, and other complex systems where brute-force sampling would be prohibitively expensive.</p>]]></content:encoded>
    </item>
    <item>
      <title>Deepening the Secondary Market: Integrating Trade Credit into Market Clearing with the Cycles Protocol</title>
      <link>https://paperplaine.com/papers/deepening-the-secondary-market-integrating-trade-credit-into-market-clearing</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/deepening-the-secondary-market-integrating-trade-credit-into-market-clearing</guid>
      <pubDate>Mon, 04 May 2026 10:34:50 GMT</pubDate>
      <author>Tomaž Fleischman, Ethan Buchman</author>
      <category>Finance</category>
      <description>Unlocking trillions in hidden business debt to speed up payments</description>
      <content:encoded><![CDATA[<p><em>Unlocking trillions in hidden business debt to speed up payments</em></p><p>Most payment systems ignore trade credit—the informal IOUs between businesses that represent enormous untapped liquidity. A new protocol called Cycles can find and clear these debts directly without requiring a middleman to take on the risk, potentially integrating trillions of dollars in business-to-business lending into formal settlement systems.</p><p><strong>Why it matters:</strong> Businesses currently wait weeks to settle payments because trade credit sits outside official clearing systems. By tapping this hidden liquidity, companies could access cash faster and cut the working capital they need to tie up. This could be especially powerful for small suppliers and developing economies where informal credit chains are most common and access to capital is most constrained.</p>]]></content:encoded>
    </item>
    <item>
      <title>Electroencephalography and Electromyography as a Non-Invasive Biomarker of Neural Regeneration: A Review of Central and Peripheral Nervous System Injury and Regeneration</title>
      <link>https://paperplaine.com/papers/electroencephalography-and-electromyography-as-a-non-invasive-biomarker-of</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/electroencephalography-and-electromyography-as-a-non-invasive-biomarker-of</guid>
      <pubDate>Sun, 03 May 2026 07:59:27 GMT</pubDate>
      <author>Maryam Kheyrollah, Reza Khanbabaie, Chris Ullrich et al.</author>
      <category>Biology</category>
      <description>Using brain and muscle electrical signals to track nerve healing after injury</description>
      <content:encoded><![CDATA[<p><em>Using brain and muscle electrical signals to track nerve healing after injury</em></p><p>Brain waves (EEG) and muscle signals (EMG) can monitor whether nerves are actually healing after injury, offering doctors a non-invasive way to track recovery in real time. The two measurements work together: EEG reveals how the brain is reorganizing after damage, while EMG shows whether muscles are regaining function as peripheral nerves reconnect.</p><p><strong>Why it matters:</strong> Nerve injuries from stroke or spinal cord damage are hard to assess — doctors can't easily tell if healing is happening without invasive procedures. Being able to track recovery with simple electrical readings from skin electrodes would let clinicians adjust treatment earlier, predict which patients will recover function, and measure whether new therapies actually work. This bridges the gap between understanding what's happening at the molecular level and knowing whether patients are actually getting better.</p>]]></content:encoded>
    </item>
    <item>
      <title>Feature-Augmented Transformers for Robust AI-Text Detection Across Domains and Generators</title>
      <link>https://paperplaine.com/papers/feature-augmented-transformers-for-robust-ai-text-detection-across-domains-and</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/feature-augmented-transformers-for-robust-ai-text-detection-across-domains-and</guid>
      <pubDate>Tue, 05 May 2026 16:52:26 GMT</pubDate>
      <author>Mohamed Mady, Johannes Reschke, Björn Schuller</author>
      <category>AI</category>
      <description>Making AI-text detectors work reliably across different sources and writing styles</description>
      <content:encoded><![CDATA[<p><em>Making AI-text detectors work reliably across different sources and writing styles</em></p><p>Detectors trained to spot AI-generated text perform near-perfectly on familiar material but fail badly when encountering text from new sources or generators—a problem researchers call brittleness. Adding linguistic features like readability and vocabulary patterns to a transformer model improved performance across different domains, pushing balanced accuracy from around 60% to 86% when tested on unfamiliar text.</p><p><strong>Why it matters:</strong> As AI systems generate text at scale across the internet, platforms need detectors that actually work in the real world, not just in controlled testing. This research shows that simple feature engineering can cut detectors' errors by roughly a factor of three when they encounter new types of AI generators, making them practically useful for content moderation and detection systems that can't be retrained constantly.</p>]]></content:encoded>
    </item>
    <item>
      <title>Conditional Diffusion Sampling</title>
      <link>https://paperplaine.com/papers/conditional-diffusion-sampling</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/conditional-diffusion-sampling</guid>
      <pubDate>Tue, 05 May 2026 17:36:29 GMT</pubDate>
      <author>Francisco M. Castro-Macías, Pablo Morales-Álvarez, Saifuddin Syed et al.</author>
      <category>Statistics</category>
      <description>A faster way to sample from messy, multimodal probability distributions</description>
      <content:encoded><![CDATA[<p><em>A faster way to sample from messy, multimodal probability distributions</em></p><p>Researchers combined two established sampling methods—Parallel Tempering and diffusion models—into a hybrid approach that requires no neural network training. The new method uses Parallel Tempering to explore the overall landscape first, then applies a mathematically exact transport process to refine samples locally, achieving better results with fewer probability evaluations than existing methods.</p><p><strong>Why it matters:</strong> Sampling from complex probability distributions is central to machine learning, physics simulations, and Bayesian statistics. Current methods either require extensive training or many expensive probability evaluations. This hybrid approach cuts the computational cost of generating high-quality samples, which directly speeds up inference in scientific computing, drug discovery, and probabilistic machine learning models where every probability calculation is expensive.</p>]]></content:encoded>
    </item>
    <item>
      <title>Do Venture Capitalists Beat Random Allocation?</title>
      <link>https://paperplaine.com/papers/do-venture-capitalists-beat-random-allocation</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/do-venture-capitalists-beat-random-allocation</guid>
      <pubDate>Tue, 05 May 2026 17:04:54 GMT</pubDate>
      <author>Max Sina Knicker, Jean-Philippe Bouchaud, Michael Benzaquen</author>
      <category>Economics</category>
      <description>Why venture capitalists&#39; picks look no better than random luck</description>
      <content:encoded><![CDATA[<p><em>Why venture capitalists' picks look no better than random luck</em></p><p>Venture capital investors pick companies that perform almost identically to what chance alone would predict, when accounting for timing, location, and industry. Even the best-performing VC portfolios don't beat the outcomes expected from random selection, suggesting that skill in choosing individual companies is nearly impossible to detect in an industry dominated by a handful of huge winners.</p><p><strong>Why it matters:</strong> This finding challenges the premise that venture capitalists earn their 2-and-20 fees through superior judgment. If VC performance is indistinguishable from random allocation, it raises hard questions about whether investors should pay premium fees for what amounts to passive exposure to startups. The same pattern holds for stock analysts picking companies, suggesting skill is difficult to prove in any extreme winner-take-most market.</p>]]></content:encoded>
    </item>
    <item>
      <title>SpecKV: Adaptive Speculative Decoding with Compression-Aware Gamma Selection</title>
      <link>https://paperplaine.com/papers/speckv-adaptive-speculative-decoding-with-compression-aware-gamma-selection</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/speckv-adaptive-speculative-decoding-with-compression-aware-gamma-selection</guid>
      <pubDate>Mon, 04 May 2026 17:55:05 GMT</pubDate>
      <author>Shikhar Shukla</author>
      <category>AI</category>
      <description>Speeding up AI by automatically adjusting how many words to guess ahead</description>
      <content:encoded><![CDATA[<p><em>Speeding up AI by automatically adjusting how many words to guess ahead</em></p><p>A new system called SpecKV automatically tunes how many tokens a small draft model should propose at each step of speculative decoding, the draft-and-verify process used to speed up large language models. By reading signals from the draft model itself—like how confident it is in its guesses—SpecKV picks the best number of proposals for each moment, delivering 56% faster results than the current fixed approach with almost no added overhead.</p><p><strong>Why it matters:</strong> Large language models power chatbots, search, and countless AI applications, and making them faster directly cuts energy costs and lets more people access them affordably. A 56% speedup with minimal overhead means faster responses for users and significantly lower compute bills for companies running these systems at scale.</p>]]></content:encoded>
    </item>
    <item>
      <title>TRACED: In vivo imaging of extracellular intrinsic diffusivity, tortuosity, cell size distribution and cell density in human glioma patients</title>
      <link>https://paperplaine.com/papers/traced-in-vivo-imaging-of-extracellular-intrinsic-diffusivity-tortuosity-cell</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/traced-in-vivo-imaging-of-extracellular-intrinsic-diffusivity-tortuosity-cell</guid>
      <pubDate>Mon, 04 May 2026 14:03:48 GMT</pubDate>
      <author>Joshua K. Marchant, Hong-Hsi Lee, Elizabeth R. Gerstner et al.</author>
      <category>Engineering</category>
      <description>Reading tumor cell size and density from brain MRI scans without a biopsy</description>
      <content:encoded><![CDATA[<p><em>Reading tumor cell size and density from brain MRI scans without a biopsy</em></p><p>Researchers developed TRACED, a new method that extracts detailed information about tumor structure directly from standard MRI scans of brain cancer patients. The technique measures cell size, cell density, and how easily water moves through tumor tissue — measurements previously only possible through invasive biopsies — and the team verified these measurements against actual tumor tissue samples from two patients.</p><p><strong>Why it matters:</strong> Brain tumor surgery and treatment decisions depend on understanding tumor structure, but biopsies are invasive, risky, and only sample one small location. This MRI-based approach could let doctors assess tumor properties across the entire tumor without any biopsy, potentially improving treatment planning and monitoring how tumors respond to therapy.</p>]]></content:encoded>
    </item>
    <item>
      <title>Note on Strong Quantum Markov Properties</title>
      <link>https://paperplaine.com/papers/note-on-strong-quantum-markov-properties</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/note-on-strong-quantum-markov-properties</guid>
      <pubDate>Mon, 04 May 2026 17:49:07 GMT</pubDate>
      <author>Chi-Fang Chen</author>
      <category>Physics</category>
      <description>When quantum systems reveal their secrets through local measurements</description>
      <content:encoded><![CDATA[<p><em>When quantum systems reveal their secrets through local measurements</em></p><p>A quantum state satisfies a "strong Markov property" if you can recover lost information about it by measuring just one copy and applying a local fix — and this works the same way regardless of what you actually measure. The researchers show this property is equivalent to a simpler mathematical condition: correlations must decay in a particular way, and they prove three surprising consequences, including that you can estimate multiple properties of a quantum state from a single measurement.</p><p><strong>Why it matters:</strong> Quantum systems are notoriously fragile and hard to measure. This result shows that under certain conditions — when a quantum state has the strong Markov property — you don't need many copies or elaborate measurement schemes to extract useful information. This could simplify how we extract information from quantum devices and systems in the lab, and it deepens our understanding of which quantum states are easier to work with in practice.</p>]]></content:encoded>
    </item>
    <item>
      <title>mdok-style at SemEval-2026 Task 9: Finetuning LLMs for Multilingual Polarization Detection</title>
      <link>https://paperplaine.com/papers/mdok-style-at-semeval-2026-task-9-finetuning-llms-for-multilingual-polarization</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/mdok-style-at-semeval-2026-task-9-finetuning-llms-for-multilingual-polarization</guid>
      <pubDate>Mon, 04 May 2026 15:08:24 GMT</pubDate>
      <author>Dominik Macko, Alok Debnath, Jakub Simko</author>
      <category>AI</category>
      <description>Spotting inflammatory speech across 22 languages before it turns toxic</description>
      <content:encoded><![CDATA[<p><em>Spotting inflammatory speech across 22 languages before it turns toxic</em></p><p>Researchers built an AI system to detect polarizing content online across 22 languages by finetuning large language models with a technique that keeps computational costs manageable. They strengthened the system by training it on multiple versions of the same text—anonymized, capitalized differently, and with character substitutions—making it more likely to catch polarization even when people use tricks to avoid detection.</p><p><strong>Why it matters:</strong> Online polarization often escalates into hate speech and social division. Catching inflammatory rhetoric early, across languages and cultures, gives platforms a practical tool to intervene before discussions turn hostile. The approach also shows how to build multilingual AI systems efficiently, without needing expensive computational resources.</p>]]></content:encoded>
    </item>
    <item>
      <title>Towards Improving Speaker Distance Estimation through Generative Impulse Response Augmentation</title>
      <link>https://paperplaine.com/papers/towards-improving-speaker-distance-estimation-through-generative-impulse</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/towards-improving-speaker-distance-estimation-through-generative-impulse</guid>
      <pubDate>Fri, 01 May 2026 15:08:42 GMT</pubDate>
      <author>Anton Ratnarajah, Mehmet Ergezer, Arun Nair et al.</author>
      <category>AI</category>
      <description>Using artificial sound reflections to help systems pinpoint where speakers are standing</description>
      <content:encoded><![CDATA[<p><em>Using artificial sound reflections to help systems pinpoint where speakers are standing</em></p><p>Researchers improved distance estimation accuracy by generating synthetic acoustic data to train AI models. The approach reduced localization error by up to 68% across different room types—bringing average errors down from 2.18 meters to 0.69 meters in some settings.</p><p><strong>Why it matters:</strong> Accurate speaker distance estimation matters for hearing aids, video conferencing systems, and spatial audio applications that need to know where someone is in a room. Real acoustic recordings are expensive and limited; this method shows that artificially generated sound reflections can work just as well for training, making it faster and cheaper to build better location-aware audio systems.</p>]]></content:encoded>
    </item>
    <item>
      <title>Unsupervised Denoising of Real Clinical Low Dose Liver CT with Perceptual Attention Networks</title>
      <link>https://paperplaine.com/papers/unsupervised-denoising-of-real-clinical-low-dose-liver-ct-with-perceptual</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/unsupervised-denoising-of-real-clinical-low-dose-liver-ct-with-perceptual</guid>
      <pubDate>Fri, 01 May 2026 17:19:15 GMT</pubDate>
      <author>Jingxi Pu, Tonghua Liu, Zhilin Guan et al.</author>
      <category>Comp Sci</category>
      <description>Cleaning up blurry CT scans without needing perfect reference images</description>
      <content:encoded><![CDATA[<p><em>Cleaning up blurry CT scans without needing perfect reference images</em></p><p>Researchers developed an artificial intelligence system that removes noise from low-dose CT scans without requiring paired clean images for training—a major obstacle in medical imaging. The system was tested on real clinical scans and validated by radiologists, achieving results comparable to supervised methods while solving the practical problem that hospitals rarely have perfectly clean versions of the same scan to learn from.</p><p><strong>Why it matters:</strong> Low-dose CT reduces radiation risk to patients, but the grainy images can make tumors and other abnormalities harder to spot, potentially leading to missed diagnoses. This technique cleans up those images automatically using only the noisy scans themselves, making it immediately usable in hospitals without requiring expensive paired training data. Radiologists who reviewed the results confirmed it meets clinical standards, meaning patients could get safer imaging without sacrificing diagnostic clarity.</p>]]></content:encoded>
    </item>
    <item>
      <title>Optimal Merton&#39;s Problem under Multivariate Affine Volterra Models with Jumps</title>
      <link>https://paperplaine.com/papers/optimal-merton-s-problem-under-multivariate-affine-volterra-models-with-jumps</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/optimal-merton-s-problem-under-multivariate-affine-volterra-models-with-jumps</guid>
      <pubDate>Fri, 01 May 2026 14:21:35 GMT</pubDate>
      <author>Sigui Brice Dro, Emmanuel Gnabeyeu</author>
      <category>Math</category>
      <description>How investors should rebalance portfolios when markets jump unpredictably</description>
      <content:encoded><![CDATA[<p><em>How investors should rebalance portfolios when markets jump unpredictably</em></p><p>Investors often adjust their portfolios based on past market patterns, but real markets jump suddenly and have memory — past prices influence future ones in ways classical models ignore. This paper solves the classic portfolio-balancing problem for these more realistic, jumpy markets with memory, deriving concrete investment strategies that account for both features of real markets.</p><p><strong>Why it matters:</strong> Standard portfolio advice assumes smooth, memoryless markets — assumptions that fail during crashes and volatility clusters. This work provides investors and fund managers with mathematically rigorous strategies tailored to real market behavior, potentially improving returns and risk management when applied to multi-asset portfolios.</p>]]></content:encoded>
    </item>
    <item>
      <title>Position: agentic AI orchestration should be Bayes-consistent</title>
      <link>https://paperplaine.com/papers/position-agentic-ai-orchestration-should-be-bayes-consistent</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/position-agentic-ai-orchestration-should-be-bayes-consistent</guid>
      <pubDate>Fri, 01 May 2026 15:43:43 GMT</pubDate>
      <author>Theodore Papamarkou, Pierre Alquier, Matthias Bauer et al.</author>
      <category>AI</category>
      <description>Why AI assistants need better decision-making rules for choosing which tools to use</description>
      <content:encoded><![CDATA[<p><em>Why AI assistants need better decision-making rules for choosing which tools to use</em></p><p>Large language models are good at predicting and reasoning, but bad at making decisions when stakes are high—like choosing which expert to ask or how much to spend. This paper argues that AI systems should use Bayesian probability rules at the control layer that decides which tools to deploy, rather than trying to make the language models themselves fully probabilistic, because this approach is practical and mathematically sound for real-world decisions under uncertainty.</p><p><strong>Why it matters:</strong> When an AI system decides to call a specialist, request more data, or allocate resources, getting that call wrong can be expensive or risky. Using Bayesian decision theory at the orchestration level means the system tracks what it actually knows, updates beliefs as it gathers information, and chooses actions deliberately rather than by default. This framework also makes human-AI collaboration clearer: humans can see what the system believes and why it made a choice, making the system's reasoning auditable and correctable.</p>]]></content:encoded>
    </item>
    <item>
      <title>Foresight Arena: An On-Chain Benchmark for Evaluating AI Forecasting Agents</title>
      <link>https://paperplaine.com/papers/foresight-arena-an-on-chain-benchmark-for-evaluating-ai-forecasting-agents</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/foresight-arena-an-on-chain-benchmark-for-evaluating-ai-forecasting-agents</guid>
      <pubDate>Fri, 01 May 2026 05:33:10 GMT</pubDate>
      <author>Maksym Nechepurenko, Pavel Shuvalov</author>
      <category>Finance</category>
      <description>A blockchain-based test for AI that can actually predict the future</description>
      <content:encoded><![CDATA[<p><em>A blockchain-based test for AI that can actually predict the future</em></p><p>Researchers built an on-chain benchmark that measures whether AI forecasting agents can genuinely predict real-world events better than existing markets, rather than just copying market prices or getting lucky with timing. The system uses blockchain smart contracts to prevent cheating and applies statistical scoring rules that reward honest probability estimates, and testing shows that detecting a real forecasting edge requires roughly 350 predictions—far more than most existing evaluations.</p><p><strong>Why it matters:</strong> Most AI forecasting systems today are evaluated on static datasets or by their trading profits, both of which hide whether an AI actually has predictive skill or just got lucky with market timing and position sizing. This benchmark lets anyone trustlessly evaluate AI forecasting agents on real prediction markets with proper statistical incentives, cutting through the noise to identify which systems genuinely see the future more clearly than crowds do. For AI companies and traders, it's a way to separate signal from noise; for the broader AI safety community, it's a model for building evaluations resistant to overfitting and centralized gaming.</p>]]></content:encoded>
    </item>
    <item>
      <title>Adaptive Querying with AI Persona Priors</title>
      <link>https://paperplaine.com/papers/adaptive-querying-with-ai-persona-priors</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/adaptive-querying-with-ai-persona-priors</guid>
      <pubDate>Fri, 01 May 2026 14:34:25 GMT</pubDate>
      <author>Kaizheng Wang, Yuhang Wu, Assaf Zeevi</author>
      <category>Statistics</category>
      <description>Using AI personas to ask smarter survey questions with limited budgets</description>
      <content:encoded><![CDATA[<p><em>Using AI personas to ask smarter survey questions with limited budgets</em></p><p>Researchers developed a new method for adaptive surveys that uses artificial intelligence personas—templates of how different types of people respond—to predict what questions will be most informative to ask next. Rather than relying on rigid statistical models or expensive computations, the approach treats each person as belonging to one of several AI-generated persona types, which allows for quick, accurate predictions and efficient question selection even when surveying new populations or asking about unfamiliar topics.</p><p><strong>Why it matters:</strong> Surveys and tests that adapt their questions based on previous answers can extract more reliable information while asking fewer questions—cutting costs and reducing respondent fatigue. This method makes adaptive surveying practical for real applications like market research, psychological assessment, and opinion polling, especially when you're starting fresh with a new population and can't rely on historical data. The approach also produces interpretable results: you learn not just what someone thinks, but which persona type they resemble, offering actionable insights alongside raw answers.</p>]]></content:encoded>
    </item>
    <item>
      <title>Beyond Gaussian Bottlenecks: Topologically Aligned Encoding of Vision-Transformer Feature Spaces</title>
      <link>https://paperplaine.com/papers/beyond-gaussian-bottlenecks-topologically-aligned-encoding-of-vision</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/beyond-gaussian-bottlenecks-topologically-aligned-encoding-of-vision</guid>
      <pubDate>Thu, 30 Apr 2026 17:12:31 GMT</pubDate>
      <author>Andrew Bond, Ilkin Umut Melanlioglu, Erkut Erdem et al.</author>
      <category>AI</category>
      <description>Better 3D geometry in AI videos by redesigning how models compress visual information</description>
      <content:encoded><![CDATA[<p><em>Better 3D geometry in AI videos by redesigning how models compress visual information</em></p><p>Video models often generate plausible motion but fail to preserve real 3D geometry and camera movement. Researchers developed S²VAE, which replaces conventional compression methods with a geometry-aware design that forces the model to think in terms of 3D space, depth, and physical structure rather than appearance alone—and showed this approach consistently outperforms existing methods, especially when heavy compression is needed.</p><p><strong>Why it matters:</strong> Video synthesis systems power everything from robotics simulation to 3D content creation. Models that properly preserve 3D geometry and camera physics produce more realistic, physically plausible outputs and could reduce the need for expensive manual corrections or post-processing. This approach also makes visual models more useful for tasks like autonomous navigation, where physical accuracy isn't optional.</p>]]></content:encoded>
    </item>
    <item>
      <title>A Thermodynamic Analysis of Enhanced Metastability in Isochoric Supercooled Liquids</title>
      <link>https://paperplaine.com/papers/a-thermodynamic-analysis-of-enhanced-metastability-in-isochoric-supercooled</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/a-thermodynamic-analysis-of-enhanced-metastability-in-isochoric-supercooled</guid>
      <pubDate>Wed, 29 Apr 2026 05:06:45 GMT</pubDate>
      <author>Boris Rubinsky</author>
      <category>Biology</category>
      <description>Why freezing liquids in sealed containers keeps them liquid longer</description>
      <content:encoded><![CDATA[<p><em>Why freezing liquids in sealed containers keeps them liquid longer</em></p><p>Keeping a liquid at constant volume instead of letting it expand makes ice crystals far less likely to form — even at temperatures well below freezing. The author shows thermodynamically that sealed, constant-volume containers create a weaker driving force toward solidification than open ones do, making ice nucleation exponentially less likely.</p><p><strong>Why it matters:</strong> Supercooled liquids (liquids cooled below their freezing point that have not yet turned to ice) have real uses in cryopreservation and medical storage. Understanding how to keep them stable longer without chemical additives could improve organ transplant viability and reduce biological sample damage during freezing procedures.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Signal Credibility Index for Prediction Markets: A Microstructure-Grounded Diagnostic with Weighted and Time-Varying Extensions</title>
      <link>https://paperplaine.com/papers/the-signal-credibility-index-for-prediction-markets-a-microstructure-grounded</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-signal-credibility-index-for-prediction-markets-a-microstructure-grounded</guid>
      <pubDate>Wed, 29 Apr 2026 17:19:45 GMT</pubDate>
      <author>Maksym Nechepurenko</author>
      <category>Economics</category>
      <description>Telling real market signals from trading noise and manipulation</description>
      <content:encoded><![CDATA[<p><em>Telling real market signals from trading noise and manipulation</em></p><p>Prediction markets move for many reasons — genuine new information, temporary trading pressure, large traders repositioning, or coordinated manipulation — but their prices treat all these moves as equivalent. This paper develops a diagnostic tool that distinguishes between them, identifying which price moves reflect durable market insights and which are fleeting or deceptive.</p><p><strong>Why it matters:</strong> Prediction markets are used to forecast election outcomes, pandemic severity, and tech breakthroughs — decisions that depend on whether price movements mean something real. If traders or manipulators can make prices move without providing genuine information, the market becomes less reliable for forecasting. This index makes it possible to flag when a price move might be noise or manipulation rather than actual wisdom.</p>]]></content:encoded>
    </item>
    <item>
      <title>Splitting Argumentation Frameworks with Collective Attacks and Supports</title>
      <link>https://paperplaine.com/papers/splitting-argumentation-frameworks-with-collective-attacks-and-supports</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/splitting-argumentation-frameworks-with-collective-attacks-and-supports</guid>
      <pubDate>Thu, 30 Apr 2026 17:01:06 GMT</pubDate>
      <author>Matti Berthold, Lydia Blümel, Giovanni Buraglio et al.</author>
      <category>AI</category>
      <description>Breaking complex arguments into manageable pieces while keeping group logic intact</description>
      <content:encoded><![CDATA[<p><em>Breaking complex arguments into manageable pieces while keeping group logic intact</em></p><p>Researchers developed new techniques to split apart complex argumentation systems that include both collective attacks (where multiple arguments gang up against one) and supports (where arguments reinforce each other). These splitting methods let computers handle larger, messier real-world arguments by breaking them into smaller pieces while preserving the logical relationships that make arguments work or fail together.</p><p><strong>Why it matters:</strong> Argumentation systems power AI systems that need to reason through competing claims—from legal judgment automation to medical diagnosis support. Making these systems faster and more scalable by splitting them intelligently means they can handle realistic, large-scale problems rather than toy examples. This is especially important because real arguments rarely come in clean, flat structures; they're full of interdependencies where one claim supports several others while simultaneously being attacked by groups of opposing claims.</p>]]></content:encoded>
    </item>
    <item>
      <title>Quantum Lattice Boltzmann Solutions for Transport under 3D Spatially Varying Advection on Trapped Ion Hardware</title>
      <link>https://paperplaine.com/papers/quantum-lattice-boltzmann-solutions-for-transport-under-3d-spatially-varying</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/quantum-lattice-boltzmann-solutions-for-transport-under-3d-spatially-varying</guid>
      <pubDate>Thu, 30 Apr 2026 17:11:13 GMT</pubDate>
      <author>Sayonee Ray, Jezer Jojo, Jason Iaconis et al.</author>
      <category>Physics</category>
      <description>Running fluid flow simulations on quantum computers with realistic conditions</description>
      <content:encoded><![CDATA[<p><em>Running fluid flow simulations on quantum computers with realistic conditions</em></p><p>Researchers demonstrated that quantum computers can simulate how fluids move and mix under varying flow patterns — a step toward realistic fluid dynamics calculations on quantum hardware. Using IonQ's trapped-ion systems, they solved the advection-diffusion equation in three dimensions and identified a major bottleneck: repeatedly reading out and reloading fluid density data. They propose using a technique called MPS shadow tomography to make this process faster at scale.</p><p><strong>Why it matters:</strong> Quantum computers could eventually simulate complex fluid dynamics far faster than classical computers, with applications in aircraft design, weather prediction, and chemical engineering. This work moves beyond toy problems to conditions closer to what engineers actually need to model. However, the current readout bottleneck would need to be solved before quantum computers could outperform conventional supercomputers for these problems.</p>]]></content:encoded>
    </item>
    <item>
      <title>One Single Hub Text Breaks CLIP: Identifying Vulnerabilities in Cross-Modal Encoders via Hubness</title>
      <link>https://paperplaine.com/papers/one-single-hub-text-breaks-clip-identifying-vulnerabilities-in-cross-modal</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/one-single-hub-text-breaks-clip-identifying-vulnerabilities-in-cross-modal</guid>
      <pubDate>Thu, 30 Apr 2026 10:08:35 GMT</pubDate>
      <author>Hiroyuki Deguchi, Katsuki Chousa, Yusuke Sakai</author>
      <category>Comp Sci</category>
      <description>How a single confusing text can fool systems that match images to captions</description>
      <content:encoded><![CDATA[<p><em>How a single confusing text can fool systems that match images to captions</em></p><p>Researchers found a critical weakness in CLIP and similar image-text matching systems: a single generic piece of text can be artificially close to nearly every image in a dataset, tricking the system into giving it high similarity scores even when it's meaningless. This reveals that these widely-used systems rely on flawed geometry in their internal representation space, making them vulnerable to subtle manipulation.</p><p><strong>Why it matters:</strong> Image-to-text systems power real applications—from photo search to automated caption evaluation—and companies rely on them to be robust. This vulnerability means a single malicious or accidental hub text could poison search results or break evaluation metrics that measure whether AI-generated captions match human standards, undermining trust in systems used for content moderation, accessibility, and quality assurance.</p>]]></content:encoded>
    </item>
    <item>
      <title>Crab: A Semantics-Aware Checkpoint/Restore Runtime for Agent Sandboxes</title>
      <link>https://paperplaine.com/papers/crab-a-semantics-aware-checkpoint-restore-runtime-for-agent-sandboxes</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/crab-a-semantics-aware-checkpoint-restore-runtime-for-agent-sandboxes</guid>
      <pubDate>Thu, 30 Apr 2026 17:20:19 GMT</pubDate>
      <author>Tianyuan Wu, Chaokun Chang, Lunxi Cao et al.</author>
      <category>AI</category>
      <description>Saving computer resources by knowing when AI agents actually need backups</description>
      <content:encoded><![CDATA[<p><em>Saving computer resources by knowing when AI agents actually need backups</em></p><p>Existing checkpointing systems for AI agent sandboxes either miss important OS-level side effects or waste resources by saving state after every single action. Crab cuts checkpoint overhead by 87% by intelligently deciding which agent turns actually produce recoverable state—and achieves perfect recovery where naive chat-only approaches fail.</p><p><strong>Why it matters:</strong> AI agents running in sandboxed containers need frequent backups for fault tolerance and experimentation, but constant checkpointing hurts performance and drives up costs. Crab lets companies run more agents on shared hardware at lower cost while maintaining the ability to recover from failures or roll back bad decisions—turning a system bottleneck into a non-issue.</p>]]></content:encoded>
    </item>
    <item>
      <title>Robust Constrained Optimization via Sliding Mode Control</title>
      <link>https://paperplaine.com/papers/robust-constrained-optimization-via-sliding-mode-control</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/robust-constrained-optimization-via-sliding-mode-control</guid>
      <pubDate>Thu, 30 Apr 2026 08:40:30 GMT</pubDate>
      <author>Shyam Kamal, Baby Diana, Sunidhi Pandey et al.</author>
      <category>Math</category>
      <description>A control-theory approach that solves optimization problems faster and under messy conditions</description>
      <content:encoded><![CDATA[<p><em>A control-theory approach that solves optimization problems faster and under messy conditions</em></p><p>Researchers developed a new method for solving constrained optimization problems—a common task in engineering and science—by borrowing techniques from control theory. The approach guarantees that constraints are satisfied exactly and reaches the optimal solution in finite time, even when the problem is non-convex or the system is buffeted by noise and disturbances.</p><p><strong>Why it matters:</strong> Most classical optimization methods assume clean data and ideal conditions, but real-world problems involve measurement errors, uncertainty, and unexpected disturbances. This framework solves that problem by building robustness directly into the method, allowing engineers and scientists to find good solutions reliably in noisy, uncertain environments—from robotics to power systems to machine learning.</p>]]></content:encoded>
    </item>
    <item>
      <title>A Real-time Scale-robust Network for Glottis Segmentation in Nasal Transnasal Intubation</title>
      <link>https://paperplaine.com/papers/a-real-time-scale-robust-network-for-glottis-segmentation-in-nasal-transnasal</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/a-real-time-scale-robust-network-for-glottis-segmentation-in-nasal-transnasal</guid>
      <pubDate>Thu, 30 Apr 2026 03:51:25 GMT</pubDate>
      <author>Yang Zhou, Chaoyong Zhang, Ruoyi Hao et al.</author>
      <category>Engineering</category>
      <description>AI that helps doctors see the airway clearly during breathing tube insertion</description>
      <content:encoded><![CDATA[<p><em>AI that helps doctors see the airway clearly during breathing tube insertion</em></p><p>Researchers developed a fast, lightweight artificial intelligence system that can reliably identify the glottis (the opening to the windpipe) during nasal intubation, even as it changes size dramatically throughout the procedure. The system achieved 92.9% accuracy while running on portable devices at over 170 frames per second, outperforming existing methods despite the challenging lighting and anatomical complexity of the procedure.</p><p><strong>Why it matters:</strong> Nasotracheal intubation is a critical procedure for maintaining patient airways, and real-time visual guidance reduces complications and speeds up the process. This technology enables hospitals to use AI assistance on standard equipment rather than specialized high-powered computers, making safer, faster intubations accessible in more clinical settings and emergency situations.</p>]]></content:encoded>
    </item>
    <item>
      <title>Claw-Eval-Live: A Live Agent Benchmark for Evolving Real-World Workflows</title>
      <link>https://paperplaine.com/papers/claw-eval-live-a-live-agent-benchmark-for-evolving-real-world-workflows</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/claw-eval-live-a-live-agent-benchmark-for-evolving-real-world-workflows</guid>
      <pubDate>Thu, 30 Apr 2026 17:23:19 GMT</pubDate>
      <author>Chenxin Li, Zhengyang Tang, Huangxin Lin et al.</author>
      <category>AI</category>
      <description>Testing AI agents on real work that keeps changing, not frozen task lists</description>
      <content:encoded><![CDATA[<p><em>Testing AI agents on real work that keeps changing, not frozen task lists</em></p><p>AI agents that work across software tools and business systems still struggle with everyday tasks—the best model tested only completed 67% of them. A new benchmark called Claw-Eval-Live tracks what people actually need done rather than relying on static task lists, and grades agents by checking whether they actually executed the work, not just whether they gave a good answer.</p><p><strong>Why it matters:</strong> Companies increasingly rely on AI agents to handle business workflows like HR tasks and spreadsheet repairs, but current benchmarks don't reflect the real, constantly changing demands these agents face. This benchmark reveals that workflow automation is nowhere near reliable enough for critical business work—and shows that models appearing equally capable on paper can perform very differently on actual tasks, which matters for deciding which AI system to trust with real work.</p>]]></content:encoded>
    </item>
    <item>
      <title>Modeling dependency between operational risk losses and macroeconomic variables using Hidden Markov Models</title>
      <link>https://paperplaine.com/papers/modeling-dependency-between-operational-risk-losses-and-macroeconomic-variables</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/modeling-dependency-between-operational-risk-losses-and-macroeconomic-variables</guid>
      <pubDate>Thu, 23 Apr 2026 14:38:51 GMT</pubDate>
      <author>Nikeethan Selvaratnam, Dorinel Bastide, Clément Fernandes et al.</author>
      <category>Finance</category>
      <description>Predicting when banks will suffer losses by tracking economic health</description>
      <content:encoded><![CDATA[<p><em>Predicting when banks will suffer losses by tracking economic health</em></p><p>Banks lose money unpredictably—and those losses often spike when the economy weakens. Researchers built a statistical model that tracks hidden economic states and uses them to forecast operational losses, showing that macroeconomic conditions like unemployment and interest rates do meaningfully predict when these costly failures will occur.</p><p><strong>Why it matters:</strong> Banks must set aside capital reserves for potential losses, and stress-testing requirements force them to model worst-case scenarios. A better prediction method could help regulators and banks estimate required reserves more accurately, avoiding either dangerously low buffers or wasteful overprovision. This affects lending capacity and ultimately how much credit flows to the real economy.</p>]]></content:encoded>
    </item>
    <item>
      <title>LLM as Clinical Graph Structure Refiner: Enhancing Representation Learning in EEG Seizure Diagnosis</title>
      <link>https://paperplaine.com/papers/llm-as-clinical-graph-structure-refiner-enhancing-representation-learning-in</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/llm-as-clinical-graph-structure-refiner-enhancing-representation-learning-in</guid>
      <pubDate>Thu, 30 Apr 2026 17:57:12 GMT</pubDate>
      <author>Lincan Li, Zheng Chen, Yushun Dong</author>
      <category>AI</category>
      <description>Using AI language models to clean up messy brain-wave data for seizure detection</description>
      <content:encoded><![CDATA[<p><em>Using AI language models to clean up messy brain-wave data for seizure detection</em></p><p>Researchers showed that large language models can improve how computers detect seizures from EEG brain scans by cleaning up noisy connections in data networks. Their two-stage approach first builds a graph of brain-signal relationships, then uses an LLM to remove false or redundant connections, achieving better detection accuracy and more interpretable results on standard medical datasets.</p><p><strong>Why it matters:</strong> Seizure detection is critical for patient safety, but EEG signals are notoriously noisy and hard to analyze accurately. This method improves detection reliability while making the underlying analysis transparent to doctors—important when machine learning outputs directly affect treatment decisions. The approach demonstrates a practical way to combine language models with medical AI, potentially accelerating similar improvements in other brain-imaging diagnostics.</p>]]></content:encoded>
    </item>
    <item>
      <title>PhyCo: Learning Controllable Physical Priors for Generative Motion</title>
      <link>https://paperplaine.com/papers/phyco-learning-controllable-physical-priors-for-generative-motion</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/phyco-learning-controllable-physical-priors-for-generative-motion</guid>
      <pubDate>Thu, 30 Apr 2026 17:53:03 GMT</pubDate>
      <author>Sriram Narayanan, Ziyu Jiang, Srinivasa Narasimhan et al.</author>
      <category>AI</category>
      <description>Teaching AI to generate videos where objects move and collide realistically</description>
      <content:encoded><![CDATA[<p><em>Teaching AI to generate videos where objects move and collide realistically</em></p><p>Video generation models can now create realistic motion and physics interactions—objects bounce properly, materials deform correctly, and friction behaves as expected—by training on 100,000+ simulated videos where physical properties are systematically varied. The system lets users control these physical attributes directly, without needing to reconstruct 3D geometry or run simulations after generation.</p><p><strong>Why it matters:</strong> Current video AI produces visually plausible but physically nonsensical motion: objects pass through each other, gravity works inconsistently, and materials respond wrongly to forces. PhyCo fixes this at generation time, which matters for video effects in film and games, robot training simulations, and any application where physical accuracy affects downstream decisions. Users can now specify exact friction or material properties and get videos that respect them automatically.</p>]]></content:encoded>
    </item>
    <item>
      <title>Intern-Atlas: A Methodological Evolution Graph as Research Infrastructure for AI Scientists</title>
      <link>https://paperplaine.com/papers/intern-atlas-a-methodological-evolution-graph-as-research-infrastructure-for-ai</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/intern-atlas-a-methodological-evolution-graph-as-research-infrastructure-for-ai</guid>
      <pubDate>Thu, 30 Apr 2026 17:44:55 GMT</pubDate>
      <author>Yujun Wu, Dongxu Zhang, Xinchen Li et al.</author>
      <category>AI</category>
      <description>Mapping how AI methods build on each other to help research agents learn faster</description>
      <content:encoded><![CDATA[<p><em>Mapping how AI methods build on each other to help research agents learn faster</em></p><p>Researchers created Intern-Atlas, a map of how artificial intelligence research methods have evolved and built upon one another across over 1 million papers. Unlike traditional citation networks that just link papers together, this map explicitly shows why and how new methods emerge from old ones, capturing the specific breakthroughs that prompt researchers to try different approaches.</p><p><strong>Why it matters:</strong> AI research agents—systems designed to help scientists by reading and synthesizing research—currently struggle to understand how methods are connected because that information is buried in text. Intern-Atlas gives them an explicit roadmap, making it possible for automated systems to suggest promising research directions or identify when a method is ready for a new application. This infrastructure could accelerate how quickly AI researchers iterate on ideas and help catch dead ends before humans invest time in them.</p>]]></content:encoded>
    </item>
    <item>
      <title>FlexiTac: A Low-Cost, Open-Source, Scalable Tactile Sensing Solution for Robotic Systems</title>
      <link>https://paperplaine.com/papers/flexitac-a-low-cost-open-source-scalable-tactile-sensing-solution-for-robotic</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/flexitac-a-low-cost-open-source-scalable-tactile-sensing-solution-for-robotic</guid>
      <pubDate>Thu, 30 Apr 2026 17:43:07 GMT</pubDate>
      <author>Binghao Huang, Yunzhu Li</author>
      <category>AI</category>
      <description>Cheap, shareable touch sensors that let robots feel what they grab</description>
      <content:encoded><![CDATA[<p><em>Cheap, shareable touch sensors that let robots feel what they grab</em></p><p>Researchers built FlexiTac, a low-cost tactile sensing system that gives robot hands the ability to detect pressure and texture through flexible sensor pads and simple electronics. The system costs far less than existing alternatives, works on different types of grippers, and can be manufactured quickly and consistently—making it practical for widespread use in robotics labs and industry.</p><p><strong>Why it matters:</strong> Robot dexterity has been held back by expensive, fragile touch sensors that few labs can afford or easily integrate into new designs. FlexiTac removes that barrier: its open-source design, low manufacturing cost, and plug-and-play setup mean more researchers can experiment with touch-based learning, and manufacturers can add sensitive manipulation to more types of robots. This could accelerate progress in tasks like assembly, sorting, and manipulation that currently require human workers.</p>]]></content:encoded>
    </item>
    <item>
      <title>Defending Quantum Classifiers against Adversarial Perturbations through Quantum Autoencoders</title>
      <link>https://paperplaine.com/papers/defending-quantum-classifiers-against-adversarial-perturbations-through-quantum</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/defending-quantum-classifiers-against-adversarial-perturbations-through-quantum</guid>
      <pubDate>Thu, 30 Apr 2026 17:56:40 GMT</pubDate>
      <author>Emma Andrews, Sahan Sanjaya, Prabhat Mishra</author>
      <category>Comp Sci</category>
      <description>Protecting quantum AI classifiers from sneaky adversarial tricks</description>
      <content:encoded><![CDATA[<p><em>Protecting quantum AI classifiers from sneaky adversarial tricks</em></p><p>Quantum machine learning systems that classify images can be fooled by specially crafted noise, just like regular AI systems. Researchers developed a defense using quantum autoencoders to clean up corrupted data before classification, improving accuracy by up to 68% under attack without needing to retrain the system on known threats.</p><p><strong>Why it matters:</strong> As quantum computers become practical tools for real tasks, securing them against adversarial attacks matters for any high-stakes application—medical imaging, security screening, or autonomous systems. This defense works without the overhead of constantly retraining on new attack types, making it more practical to deploy when attackers keep changing their tactics.</p>]]></content:encoded>
    </item>
    <item>
      <title>Strait: Perceiving Priority and Interference in ML Inference Serving</title>
      <link>https://paperplaine.com/papers/strait-perceiving-priority-and-interference-in-ml-inference-serving</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/strait-perceiving-priority-and-interference-in-ml-inference-serving</guid>
      <pubDate>Thu, 30 Apr 2026 17:55:28 GMT</pubDate>
      <author>Haidong Zhao, Nikolaos Georgantas</author>
      <category>Comp Sci</category>
      <description>Scheduling AI requests fairly when multiple tasks compete for GPU time</description>
      <content:encoded><![CDATA[<p><em>Scheduling AI requests fairly when multiple tasks compete for GPU time</em></p><p>Strait is a system for managing requests to machine learning models running on GPUs when some requests matter more than others. It predicts how long each request will take even when multiple requests run simultaneously, then uses those predictions to prioritize urgent requests—cutting missed deadlines for high-priority tasks by up to 11 percentage points without completely starving lower-priority work.</p><p><strong>Why it matters:</strong> Companies running AI services on their own hardware often need to handle both time-sensitive requests (like fraud detection) and routine ones (like recommendations) on the same machines. Current systems either guess badly at how long things will take under load or simply interrupt low-priority tasks—wasting GPU power. Strait lets businesses meet their critical deadlines while still processing regular work efficiently, making on-premises AI infrastructure more practical.</p>]]></content:encoded>
    </item>
    <item>
      <title>Mapping the Phase Diagram of the Vicsek Model with Machine Learning</title>
      <link>https://paperplaine.com/papers/mapping-the-phase-diagram-of-the-vicsek-model-with-machine-learning</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/mapping-the-phase-diagram-of-the-vicsek-model-with-machine-learning</guid>
      <pubDate>Thu, 30 Apr 2026 17:52:23 GMT</pubDate>
      <author>Grace T. Bai, Brandon B. Le</author>
      <category>Comp Sci</category>
      <description>Using AI to map where flocking behavior switches between chaos and order</description>
      <content:encoded><![CDATA[<p><em>Using AI to map where flocking behavior switches between chaos and order</em></p><p>Researchers used machine learning to chart the complete phase diagram of the Vicsek model—a mathematical model of how animals flock together—across its full parameter space. By training a neural network on simulated data, they achieved 92% accuracy in predicting when the system transitions between disordered, ordered, and mixed states, and revealed a previously unclear boundary region between ordered and chaotic behavior.</p><p><strong>Why it matters:</strong> Phase diagrams are critical maps in physics and biology that show where systems behave differently. This machine-learning approach turns expensive simulations into comprehensive maps that can predict behavior across untested regions, potentially accelerating research into real collective motion—from bird flocks to autonomous robot swarms—by replacing exhaustive simulations with trained algorithms.</p>]]></content:encoded>
    </item>
    <item>
      <title>Explainable Load Forecasting with Covariate-Informed Time Series Foundation Models</title>
      <link>https://paperplaine.com/papers/explainable-load-forecasting-with-covariate-informed-time-series-foundation</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/explainable-load-forecasting-with-covariate-informed-time-series-foundation</guid>
      <pubDate>Thu, 30 Apr 2026 17:36:24 GMT</pubDate>
      <author>Matthias Hertel, Alexandra Nikoltchovska, Sebastian Pütz et al.</author>
      <category>Comp Sci</category>
      <description>Making AI power grid forecasts understandable and trustworthy</description>
      <content:encoded><![CDATA[<p><em>Making AI power grid forecasts understandable and trustworthy</em></p><p>Researchers found that advanced AI models can predict electricity demand as accurately as traditional ones while remaining interpretable—a crucial requirement for critical infrastructure. By developing a method to explain which factors (weather, time of day, historical patterns) drive each prediction, they showed that these models reliably use the right information to make decisions, matching established expertise about what actually moves power consumption.</p><p><strong>Why it matters:</strong> Power grid operators need to understand *why* a forecast says demand will spike before they commit expensive resources. Black-box predictions, no matter how accurate, create operational risk and regulatory friction. This work proves that grid forecasting can be both cutting-edge and transparent, removing a major barrier to deploying faster, more efficient AI systems in electricity infrastructure.</p>]]></content:encoded>
    </item>
    <item>
      <title>Hypergraph independence bounds: from maximum degree to average degree</title>
      <link>https://paperplaine.com/papers/hypergraph-independence-bounds-from-maximum-degree-to-average-degree</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/hypergraph-independence-bounds-from-maximum-degree-to-average-degree</guid>
      <pubDate>Thu, 30 Apr 2026 15:58:27 GMT</pubDate>
      <author>Jing Yu, Junchi Zhang</author>
      <category>Math</category>
      <description>Why independent-set guarantees carry over from maximum degree to average degree</description>
      <content:encoded><![CDATA[<p><em>Why independent-set guarantees carry over from maximum degree to average degree</em></p><p>Mathematicians proved that if networks with a strict cap on connections per node are guaranteed to contain an independent set (a group of mutually unconnected nodes) of a certain size, then the same guarantee automatically holds for networks whose average number of connections stays within that same cap. The result bridges two different ways of measuring network sparsity and applies to hypergraphs—the generalization of networks where edges can connect more than two nodes at once.</p><p><strong>Why it matters:</strong> This theorem simplifies proofs across multiple network structures by eliminating the need to separately verify bounds under different sparsity conditions. Graph theorists and computer scientists studying network properties, coloring algorithms, and combinatorial optimization can now transfer known results between maximum-degree and average-degree settings, reducing redundant work and expanding what we know about when large independent sets must exist in sparse networks.</p>]]></content:encoded>
    </item>
    <item>
      <title>Extremal graphs for average size of maximal matchings in bicyclic graphs</title>
      <link>https://paperplaine.com/papers/extremal-graphs-for-average-size-of-maximal-matchings-in-bicyclic-graphs</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/extremal-graphs-for-average-size-of-maximal-matchings-in-bicyclic-graphs</guid>
      <pubDate>Thu, 30 Apr 2026 15:47:44 GMT</pubDate>
      <author>Kai Zhang</author>
      <category>Math</category>
      <description>Finding the graph shapes that give the smallest average matchings</description>
      <content:encoded><![CDATA[<p><em>Finding the graph shapes that give the smallest average matchings</em></p><p>Mathematicians determined the minimum possible average size of maximal matchings in bicyclic graphs — networks with exactly two cycles — and identified exactly which graph shape achieves this minimum. For any such graph with n vertices, the average matching size cannot drop below (4n−11)/(2n−5), with equality occurring only when two triangles share an edge and extra vertices hang off one corner.</p><p><strong>Why it matters:</strong> This completes a research program started years ago on matching problems in increasingly complex graphs. The methods used here — breaking down the problem by identifying which small matchings drive the minimum — create a template for solving similar extremal problems on other graph families, potentially accelerating progress on open questions in combinatorics.</p>]]></content:encoded>
    </item>
    <item>
      <title>Cliques in minimally globally rigid graphs</title>
      <link>https://paperplaine.com/papers/cliques-in-minimally-globally-rigid-graphs</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/cliques-in-minimally-globally-rigid-graphs</guid>
      <pubDate>Thu, 30 Apr 2026 15:16:48 GMT</pubDate>
      <author>Julien Portier</author>
      <category>Math</category>
      <description>Why the densest possible rigid structures must be complete and symmetric</description>
      <content:encoded><![CDATA[<p><em>Why the densest possible rigid structures must be complete and symmetric</em></p><p>Mathematicians have proven that certain rigid geometric structures—ones that can't be deformed without breaking their constraints—must actually be the simplest possible version if they contain a large enough fully connected cluster of nodes (a clique). The finding confirms a 20-year-old prediction about how rigidity and connectivity relate in multidimensional space.</p><p><strong>Why it matters:</strong> This result helps engineers and mathematicians understand the boundaries between minimal rigidity and redundancy. In applications like robot design, mechanical linkages, and structural analysis, knowing exactly when a structure must be completely symmetric versus when it can be sparser tells engineers how much flexibility they have in their designs without sacrificing stability.</p>]]></content:encoded>
    </item>
    <item>
      <title>Semidefinite and linear programming bounds for sum-rank-metric codes and non-existence results</title>
      <link>https://paperplaine.com/papers/semidefinite-and-linear-programming-bounds-for-sum-rank-metric-codes-and-non</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/semidefinite-and-linear-programming-bounds-for-sum-rank-metric-codes-and-non</guid>
      <pubDate>Thu, 30 Apr 2026 14:17:22 GMT</pubDate>
      <author>Aida Abiad, Antonina P. Khramova, Sven C. Polak et al.</author>
      <category>Math</category>
      <description>Finding the limits of codes that protect data sent across networks</description>
      <content:encoded><![CDATA[<p><em>Finding the limits of codes that protect data sent across networks</em></p><p>Researchers developed new mathematical tools to determine the maximum size of error-correcting codes designed for modern communication systems like distributed storage and network coding. Using optimization techniques including semidefinite programming, they found sharper upper limits on code size than previous methods and proved that certain theoretically perfect codes cannot actually exist.</p><p><strong>Why it matters:</strong> Error-correcting codes are fundamental to reliable data transmission—from cloud storage to wireless communications. These tighter bounds help engineers understand what's theoretically possible and avoid wasting resources searching for codes that don't exist, while the new optimization methods could improve the design of more efficient communication systems.</p>]]></content:encoded>
    </item>
    <item>
      <title>Simulating Infant First-Person Sensorimotor Experience via Motion Retargeting from Babies to Humanoids</title>
      <link>https://paperplaine.com/papers/simulating-infant-first-person-sensorimotor-experience-via-motion-retargeting</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/simulating-infant-first-person-sensorimotor-experience-via-motion-retargeting</guid>
      <pubDate>Thu, 30 Apr 2026 08:37:46 GMT</pubDate>
      <author>Francisco M. López, Hoshinori Kanazawa, Ondrej Fiala et al.</author>
      <category>Biology</category>
      <description>Using robots to recreate what babies actually feel and sense while moving</description>
      <content:encoded><![CDATA[<p><em>Using robots to recreate what babies actually feel and sense while moving</em></p><p>Researchers developed a method to translate infant movements from videos onto humanoid robots and virtual models, recreating not just the motion but also the sensory feedback—touch, muscle awareness, and visual input—that babies experience. The technique reconstructs a baby's full 3D body position from a single video, then maps those movements onto different robot platforms with sub-centimeter accuracy, generating realistic streams of multimodal sensory data.</p><p><strong>Why it matters:</strong> Scientists can now study how babies develop motor skills by literally experiencing movement through a robot's sensors, rather than just watching from the outside. This opens new ways to detect early signs of developmental disorders, helps roboticists design machines that learn more like humans do, and gives developmental psychologists direct access to the sensory world of infancy—something previously impossible to measure or replicate.</p>]]></content:encoded>
    </item>
    <item>
      <title>A geometry aware framework enhances noninvasive mapping of whole human brain dynamics</title>
      <link>https://paperplaine.com/papers/a-geometry-aware-framework-enhances-noninvasive-mapping-of-whole-human-brain</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/a-geometry-aware-framework-enhances-noninvasive-mapping-of-whole-human-brain</guid>
      <pubDate>Tue, 28 Apr 2026 12:58:56 GMT</pubDate>
      <author>Song Wang, Kexin Lou, Chen Wei et al.</author>
      <category>Biology</category>
      <description>Using brain shape to map electrical signals more accurately across the whole brain</description>
      <content:encoded><![CDATA[<p><em>Using brain shape to map electrical signals more accurately across the whole brain</em></p><p>A new method called Geometric Basis Functions uses each person's unique brain shape to better pinpoint where electrical activity originates during EEG and MEG scans. The technique works by breaking down the brain's surface into natural geometric patterns and combining them to reconstruct neural activity, and tests show it achieves higher accuracy than existing approaches across multiple types of brain data.</p><p><strong>Why it matters:</strong> Current brain imaging methods often place neural activity in the wrong location or require oversimplified assumptions about how the brain is organized. This approach leverages individual brain anatomy to make non-invasive scans more precise, which could improve diagnosis of conditions like epilepsy and strengthen neuroscience research by capturing faster, more detailed maps of how different brain regions communicate.</p>]]></content:encoded>
    </item>
    <item>
      <title>One-shot emergency psychiatric triage across 15 frontier AI chatbots</title>
      <link>https://paperplaine.com/papers/one-shot-emergency-psychiatric-triage-across-15-frontier-ai-chatbots</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/one-shot-emergency-psychiatric-triage-across-15-frontier-ai-chatbots</guid>
      <pubDate>Tue, 28 Apr 2026 09:25:41 GMT</pubDate>
      <author>Veith Weilnhammer, Lennart Luettgau, Christopher Summerfield et al.</author>
      <category>Biology</category>
      <description>Do AI chatbots correctly identify psychiatric emergencies in one message?</description>
      <content:encoded><![CDATA[<p><em>Do AI chatbots correctly identify psychiatric emergencies in one message?</em></p><p>AI chatbots almost never miss true psychiatric emergencies—correctly flagging 94% of crisis cases for immediate care. But they frequently over-triage less urgent situations, incorrectly labeling routine or moderately concerning messages as needing faster response than they actually do.</p><p><strong>Why it matters:</strong> As people increasingly turn to chatbots for mental health guidance, this gap matters in opposite ways: the systems are reliable safety nets that won't let genuine crises slip through unnoticed, but they may also overwhelm emergency services and create unnecessary anxiety by treating normal distress as a crisis. Better calibration could preserve the protective function while reducing false alarms.</p>]]></content:encoded>
    </item>
    <item>
      <title>Independent-Component-Based Encoding Models of Brain Activity During Story Comprehension</title>
      <link>https://paperplaine.com/papers/independent-component-based-encoding-models-of-brain-activity-during-story</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/independent-component-based-encoding-models-of-brain-activity-during-story</guid>
      <pubDate>Mon, 27 Apr 2026 19:30:46 GMT</pubDate>
      <author>Kamya Hari, Taha Binhuraib, Jin Li et al.</author>
      <category>Biology</category>
      <description>Finding the brain&#39;s consistent story-processing networks despite individual differences</description>
      <content:encoded><![CDATA[<p><em>Finding the brain's consistent story-processing networks despite individual differences</em></p><p>Researchers developed a new way to map how brain networks respond to stories by filtering out noise and individual variation in brain anatomy. Rather than analyzing individual voxels (the 3D pixels of brain scans), they identified independent functional networks and found that certain networks—like those for hearing and language—reliably respond to linguistic features of stories across different people, with their predictions confirmed by known acoustic properties.</p><p><strong>Why it matters:</strong> Brain imaging studies often struggle because each person's brain is wired slightly differently, making it hard to draw general conclusions. This method cuts through that noise to identify which brain networks actually respond to language, regardless of where those networks sit in each individual's head. That makes it easier for neuroscientists to compare results across studies and build more accurate models of how we understand language and stories.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Financialization of Proof-of-Stake: Asymptotic Centralization under Exogenous Risk Premiums</title>
      <link>https://paperplaine.com/papers/the-financialization-of-proof-of-stake-asymptotic-centralization-under</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-financialization-of-proof-of-stake-asymptotic-centralization-under</guid>
      <pubDate>Tue, 28 Apr 2026 19:40:04 GMT</pubDate>
      <author>Mikhail Perepelitsa</author>
      <category>Finance</category>
      <description>Why cryptocurrency staking inevitably concentrates power among the wealthy</description>
      <content:encoded><![CDATA[<p><em>Why cryptocurrency staking inevitably concentrates power among the wealthy</em></p><p>When external financial markets offer better returns than cryptocurrency staking rewards, wealthy investors flood into staking anyway, driving yields toward zero and forcing ordinary users out of the system entirely. A mathematical model shows this centralization is not a temporary problem but an inevitable long-term outcome of how Proof-of-Stake networks interact with traditional finance.</p><p><strong>Why it matters:</strong> Proof-of-Stake cryptocurrencies like Ethereum were designed to be more democratic than older mining-based systems, but this research suggests the opposite happens at scale: wealth and control concentrate in fewer hands. If true, it undermines a core promise of these networks—that ordinary people can participate meaningfully in securing and governing them.</p>]]></content:encoded>
    </item>
    <item>
      <title>An Explicit Solution to Black-Scholes Implied Volatility</title>
      <link>https://paperplaine.com/papers/an-explicit-solution-to-black-scholes-implied-volatility</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/an-explicit-solution-to-black-scholes-implied-volatility</guid>
      <pubDate>Mon, 27 Apr 2026 13:46:58 GMT</pubDate>
      <author>Wolfgang Schadner</author>
      <category>Finance</category>
      <description>A direct formula solves a half-century puzzle in options trading</description>
      <content:encoded><![CDATA[<p><em>A direct formula solves a half-century puzzle in options trading</em></p><p>Researchers have derived the first explicit mathematical formula for implied volatility in the Black-Scholes model, a central calculation in options markets that previously required iterative trial-and-error methods. The solution recognizes that option prices follow a hidden probability pattern, which can be inverted to read off volatility directly from market prices. The new formula runs 3.4 times faster than current best methods while matching machine precision.</p><p><strong>Why it matters:</strong> Options traders and risk managers calculate implied volatility thousands of times per day—it's how they price contracts and manage portfolios. Replacing slow iterative methods with a direct calculation could speed up trading systems, reduce computational costs, and lower latency in high-frequency markets where milliseconds matter. The breakthrough also settles a mathematical question that has persisted since the Black-Scholes model became standard in 1973.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Anatomy of a Decentralized Prediction Market: Microstructure Evidence from the Polymarket Order Book</title>
      <link>https://paperplaine.com/papers/the-anatomy-of-a-decentralized-prediction-market-microstructure-evidence-from</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-anatomy-of-a-decentralized-prediction-market-microstructure-evidence-from</guid>
      <pubDate>Mon, 27 Apr 2026 12:01:14 GMT</pubDate>
      <author>Philipp D. Dubach</author>
      <category>Finance</category>
      <description>How prediction market orders flow when nobody&#39;s really watching closely</description>
      <content:encoded><![CDATA[<p><em>How prediction market orders flow when nobody's really watching closely</em></p><p>A detailed examination of Polymarket, the largest blockchain-based prediction market, reveals that its order book looks nothing like traditional financial markets—with unusual spreads, a different pattern of available liquidity, and surprisingly little self-dealing. The most striking finding: inferring who bought and who sold from public data works only 59% of the time, barely better than a coin flip, forcing researchers to rely on the underlying on-chain records instead.</p><p><strong>Why it matters:</strong> Prediction markets are growing as a tool for forecasting everything from elections to climate outcomes, but we know almost nothing about how they actually work. This research documents Polymarket's plumbing in detail—revealing where the standard playbook from stock markets fails and where it holds. For anyone building a competing platform, trading on these markets, or relying on their price signals for real decisions, knowing what data you can actually trust matters enormously.</p>]]></content:encoded>
    </item>
    <item>
      <title>Non-unique time and market incompleteness</title>
      <link>https://paperplaine.com/papers/non-unique-time-and-market-incompleteness</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/non-unique-time-and-market-incompleteness</guid>
      <pubDate>Sun, 26 Apr 2026 08:48:12 GMT</pubDate>
      <author>Chris Angstmann, Tim Gebbie</author>
      <category>Finance</category>
      <description>Why financial markets don&#39;t tick to a single global clock</description>
      <content:encoded><![CDATA[<p><em>Why financial markets don't tick to a single global clock</em></p><p>Financial markets don't operate on synchronized time the way traditional models assume. Instead, trading happens in random bursts tied to actual events—a buy order here, a sell order there—creating multiple valid ways to describe market time. This reveals a deeper kind of market incompleteness than economists usually discuss: the gap between the real time traders operate in and the theoretical time pricing models use.</p><p><strong>Why it matters:</strong> Traders and risk managers currently juggle two different clocks—one for actual trades and one for theoretical pricing—and this mismatch can hide real risks, especially during fast trading or market stress. Recognizing that market time is fundamentally non-unique doesn't break existing tools, but it explains why they sometimes fail at high frequencies and suggests when simpler, lower-frequency models might be more reliable for managing money and hedging positions.</p>]]></content:encoded>
    </item>
    <item>
      <title>Prediction-powered Inference by Mixture of Experts</title>
      <link>https://paperplaine.com/papers/prediction-powered-inference-by-mixture-of-experts</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/prediction-powered-inference-by-mixture-of-experts</guid>
      <pubDate>Thu, 30 Apr 2026 14:08:17 GMT</pubDate>
      <author>Yanwu Gu, Linglong Kong, Dong Xia</author>
      <category>Statistics</category>
      <description>Combining multiple AI predictions to squeeze more insight from limited labeled data</description>
      <content:encoded><![CDATA[<p><em>Combining multiple AI predictions to squeeze more insight from limited labeled data</em></p><p>When you have multiple AI prediction tools available but limited labeled data to work with, treating them as a mixture of experts can reduce statistical uncertainty and improve inference. The method automatically figures out which predictors are most reliable and weights them accordingly, delivering tighter confidence intervals than using predictions alone.</p><p><strong>Why it matters:</strong> In fields like medicine, finance, and environmental monitoring, obtaining ground-truth labels is costly or time-consuming. This framework lets organizations leverage multiple off-the-shelf AI models they already have, extracting more reliable statistical conclusions from the labeled data they can afford to collect. The guaranteed best-expert performance means the approach never does worse than just using a single good predictor.</p>]]></content:encoded>
    </item>
    <item>
      <title>Decoupled Descent: Exact Test Error Tracking Via Approximate Message Passing</title>
      <link>https://paperplaine.com/papers/decoupled-descent-exact-test-error-tracking-via-approximate-message-passing</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/decoupled-descent-exact-test-error-tracking-via-approximate-message-passing</guid>
      <pubDate>Thu, 30 Apr 2026 14:01:23 GMT</pubDate>
      <author>Max Lovig</author>
      <category>Statistics</category>
      <description>A training method that predicts test performance without wasting data on validation</description>
      <content:encoded><![CDATA[<p><em>A training method that predicts test performance without wasting data on validation</em></p><p>Machine learning models trained on data gradually become overfit, causing their performance on training data to look better than it actually is on new data. Researchers developed a new training algorithm called decoupled descent that cancels out this bias as it trains, allowing the training error to accurately predict test performance without setting aside data for validation—using 100% of available data while still knowing how well the model will perform.</p><p><strong>Why it matters:</strong> Current machine learning practice forces a choice: either waste 10–20% of your data on a validation set to estimate real performance, or train blindly and risk deploying an overfit model. This algorithm could eliminate that trade-off, letting practitioners use all their data while still getting reliable estimates of how their model will perform in the real world. The method was tested on image classification tasks and consistently narrowed the gap between training and test performance compared to standard training approaches.</p>]]></content:encoded>
    </item>
    <item>
      <title>Linear-Core Surrogates: Smooth Loss Functions with Linear Rates for Classification and Structured Prediction</title>
      <link>https://paperplaine.com/papers/linear-core-surrogates-smooth-loss-functions-with-linear-rates-for</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/linear-core-surrogates-smooth-loss-functions-with-linear-rates-for</guid>
      <pubDate>Thu, 30 Apr 2026 11:32:25 GMT</pubDate>
      <author>Mehryar Mohri, Yutao Zhong</author>
      <category>Statistics</category>
      <description>Combining fast training with accurate predictions in machine learning</description>
      <content:encoded><![CDATA[<p><em>Combining fast training with accurate predictions in machine learning</em></p><p>Researchers created a new family of loss functions called Linear-Core Surrogates that resolves a longstanding trade-off in machine learning: smooth functions are easy to optimize but learn slowly, while sharp functions learn efficiently but are hard to optimize. The new approach combines both benefits: it is smooth enough to train fast, yet produces predictions as accurate as harder-to-optimize functions. In structured prediction tasks like language processing, the smoothness enables a 23-fold speedup over existing methods.</p><p><strong>Why it matters:</strong> Training machine learning models is expensive in both time and computational energy. This approach cuts training time dramatically, by 23× on large text tasks, without sacrificing accuracy. It also handles messy real-world data better: when labels contain errors, the method outperforms standard approaches by 2.6% on common benchmarks, making it immediately useful for practitioners working with imperfect datasets.</p>]]></content:encoded>
    </item>
    <item>
      <title>Mind the Gap: Structure-Aware Consistency in Preference Learning</title>
      <link>https://paperplaine.com/papers/mind-the-gap-structure-aware-consistency-in-preference-learning</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/mind-the-gap-structure-aware-consistency-in-preference-learning</guid>
      <pubDate>Thu, 30 Apr 2026 11:24:04 GMT</pubDate>
      <author>Mehryar Mohri, Yutao Zhong</author>
      <category>Statistics</category>
      <description>Why standard AI alignment methods lack mathematical guarantees of success</description>
      <content:encoded><![CDATA[<p><em>Why standard AI alignment methods lack mathematical guarantees of success</em></p><p>Current methods for aligning AI chatbots with human preferences, including the popular DPO technique, lack mathematical proof that they actually work as intended. The authors show that these methods can fail silently—appearing to work during training but producing unreliable behavior in real use—and propose a new approach (SA-DPO) that adds semantic-aware safety margins to restore theoretical guarantees.</p><p><strong>Why it matters:</strong> As AI systems become more powerful and are deployed for high-stakes decisions, knowing whether alignment methods actually work is critical. This work provides a way to verify that an AI system trained to follow human preferences will genuinely do so, rather than discovering failures after deployment. The new method is especially useful for handling tricky cases where multiple different responses are equally correct—a common problem in real-world AI alignment.</p>]]></content:encoded>
    </item>
    <item>
      <title>CRS-LLM: Cooperative Beam Prediction with a GPT-Style Backbone and Switch-Gated Fusion</title>
      <link>https://paperplaine.com/papers/crs-llm-cooperative-beam-prediction-with-a-gpt-style-backbone-and-switch-gated</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/crs-llm-cooperative-beam-prediction-with-a-gpt-style-backbone-and-switch-gated</guid>
      <pubDate>Thu, 30 Apr 2026 14:43:25 GMT</pubDate>
      <author>Fangzhi Li, Cunhua Pan, Hong Ren et al.</author>
      <category>Engineering</category>
      <description>Teaching AI to pick the right cell tower and antenna direction for fast-moving vehicles</description>
      <content:encoded><![CDATA[<p><em>Teaching AI to pick the right cell tower and antenna direction for fast-moving vehicles</em></p><p>Researchers developed a system that predicts which cell tower and antenna beam a moving vehicle should use by treating it as a single decision rather than two separate choices. The method outperformed existing approaches across different signal strengths and showed it could work with limited training data or even transfer to new situations without retraining.</p><p><strong>Why it matters:</strong> As vehicles move faster and need stronger wireless signals, current methods that pick a tower first and then an antenna direction often fail when conditions change abruptly—causing dropped connections and wasted attempts. By making both choices at once, this system cuts errors significantly, which means smoother video calls, faster downloads, and more reliable communication for autonomous vehicles and connected cars in real-world driving conditions.</p>]]></content:encoded>
    </item>
    <item>
      <title>Flying by Inference: Active Inference World Models for Adaptive UAV Swarms</title>
      <link>https://paperplaine.com/papers/flying-by-inference-active-inference-world-models-for-adaptive-uav-swarms</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/flying-by-inference-active-inference-world-models-for-adaptive-uav-swarms</guid>
      <pubDate>Thu, 30 Apr 2026 14:34:31 GMT</pubDate>
      <author>Kaleem Arshid, Ali Krayani, Lucio Marcenaro et al.</author>
      <category>Engineering</category>
      <description>Teaching drone swarms to plan and adapt like human experts</description>
      <content:encoded><![CDATA[<p><em>Teaching drone swarms to plan and adapt like human experts</em></p><p>Researchers created a system that lets teams of flying drones learn how to plan their missions by watching expert demonstrations, then adapt on the fly without recalculating everything from scratch. The approach compressed a computationally expensive planning problem into a learnable probabilistic model, allowing swarms to handle real-world uncertainties like measurement noise and unexpected obstacles more smoothly than existing learning-based methods.</p><p><strong>Why it matters:</strong> Autonomous drone swarms currently struggle to replan quickly when conditions change—recalculating optimal paths for multiple aircraft takes too long for real-time response. This method lets swarms make smart tactical adjustments instantly by comparing their current situation to what an expert would do, making coordinated multi-drone operations practical for time-sensitive tasks like emergency response or search and rescue.</p>]]></content:encoded>
    </item>
    <item>
      <title>On the Fractional Fourier Transform for FMCW Radar Interference Mitigation</title>
      <link>https://paperplaine.com/papers/on-the-fractional-fourier-transform-for-fmcw-radar-interference-mitigation</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/on-the-fractional-fourier-transform-for-fmcw-radar-interference-mitigation</guid>
      <pubDate>Thu, 30 Apr 2026 12:00:46 GMT</pubDate>
      <author>Christian Oswald, Josef Kulmer, Franz Pernkopf</author>
      <category>Engineering</category>
      <description>Cleaning up radar signals when multiple sensors interfere with each other</description>
      <content:encoded><![CDATA[<p><em>Cleaning up radar signals when multiple sensors interfere with each other</em></p><p>When multiple FMCW radars operate near each other, their signals interfere and create false readings. Researchers developed a faster mathematical approach using the fractional Fourier transform that removes this interference, can handle multiple conflicting signals at once, and works on real radar equipment in actual environments.</p><p><strong>Why it matters:</strong> FMCW radars are used in autonomous vehicles, collision avoidance systems, and industrial sensing—all applications where multiple radars operate in close proximity. Interference causes missed detections and ghost objects, creating safety risks. A practical method to eliminate this interference without expensive hardware upgrades means existing radar systems can work reliably in crowded electromagnetic environments.</p>]]></content:encoded>
    </item>
    <item>
      <title>Bitwise Over-Parameterized Neural Polar Decoding: A Theoretical Performance Analysis</title>
      <link>https://paperplaine.com/papers/bitwise-over-parameterized-neural-polar-decoding-a-theoretical-performance</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/bitwise-over-parameterized-neural-polar-decoding-a-theoretical-performance</guid>
      <pubDate>Thu, 30 Apr 2026 10:28:09 GMT</pubDate>
      <author>Hongzhi Zhu, Wei Xu, Xiaohu You</author>
      <category>Engineering</category>
      <description>Teaching neural networks to decode wireless signals more reliably</description>
      <content:encoded><![CDATA[<p><em>Teaching neural networks to decode wireless signals more reliably</em></p><p>Researchers developed a neural network decoder for polar codes (a type of error-correcting code used in wireless communications) and proved theoretically how well it works. The key finding: making the neural network wider—giving it more internal computing capacity—consistently improves its ability to recover transmitted messages from noisy signals, and the paper shows exactly why and how much.</p><p><strong>Why it matters:</strong> Polar codes are used in 5G networks to transmit data reliably over wireless channels. Traditional decoders are fast but have performance limits; neural network decoders can do better but have been a black box. This work removes the guesswork by mathematically proving how neural decoders perform and how to build them properly, enabling engineers to design faster, more reliable wireless systems with confidence.</p>]]></content:encoded>
    </item>
    <item>
      <title>Electricity price forecasting across Norway&#39;s five bidding zones in the post-crisis era</title>
      <link>https://paperplaine.com/papers/electricity-price-forecasting-across-norway-s-five-bidding-zones-in-the-post</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/electricity-price-forecasting-across-norway-s-five-bidding-zones-in-the-post</guid>
      <pubDate>Wed, 29 Apr 2026 13:02:02 GMT</pubDate>
      <author>My Thi Diem Phan, Trung Tuyen Truong, Hoai Phuong Ha et al.</author>
      <category>Economics</category>
      <description>Predicting electricity prices when market conditions have dramatically shifted</description>
      <content:encoded><![CDATA[<p><em>Predicting electricity prices when market conditions have dramatically shifted</em></p><p>After the 2021–2022 energy crisis and closer ties to Continental Europe reshaped Norway's electricity market, old forecasting models stopped working reliably. Researchers tested eight different forecasting approaches across Norway's five bidding zones and found that a machine learning method called LightGBM performed best, achieving error margins of 1.64 to 5.74 EUR per megawatt-hour—but surprisingly, simpler models using just past prices and calendar dates came close. The key insight: external factors like reservoir levels and gas prices matter less for accuracy in normal times, but become essential for predicting how far off forecasts will be when markets get stressed.</p><p><strong>Why it matters:</strong> Norway's electricity traders, grid operators, and energy companies rely on accurate price forecasts to make buying and selling decisions worth millions of euros daily. The old models trained on pre-crisis data were giving them false confidence in their predictions. This research provides updated benchmarks that work across all five zones, and shows traders which models and feature combinations to trust—and critically, when those models are likely to fail. The finding that simpler models work just as well in routine conditions could save companies from overcomplicating their systems, while the warning about stressed regimes gives decision makers a concrete signal for when to add extra caution to their bets.</p>]]></content:encoded>
    </item>
    <item>
      <title>What Drives Contagion? Identifying and Attributing Cross-Border Transmission Mechanisms</title>
      <link>https://paperplaine.com/papers/what-drives-contagion-identifying-and-attributing-cross-border-transmission</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/what-drives-contagion-identifying-and-attributing-cross-border-transmission</guid>
      <pubDate>Wed, 29 Apr 2026 11:25:35 GMT</pubDate>
      <author>Avishek Bhandari, Ipsita Parida, Hitesh Kumar Sahu</author>
      <category>Economics</category>
      <description>How financial shocks spread across countries—and which route they take</description>
      <content:encoded><![CDATA[<p><em>How financial shocks spread across countries—and which route they take</em></p><p>When stock markets in one country crash, others often follow, but exactly how the damage spreads has been hard to pin down. This study traced contagion across 18 major economies from 2006 to 2026 and found that trade links, financial connections, and behavioral panic each play different roles depending on which crisis is happening. During the 2008 financial crisis, trade accounted for 28% of spillovers, while financial channels dominated earlier calm periods.</p><p><strong>Why it matters:</strong> Policymakers trying to firewall their economies from global financial shocks need to know which transmission routes matter most in each type of crisis. Trade restrictions might help in some scenarios but miss the real danger in others. This framework reveals which channel to target, potentially saving governments from deploying expensive or ineffective crisis responses. The method also surfaces when the evidence is genuinely uncertain—transparency the researchers say is missing from most contagion research.</p>]]></content:encoded>
    </item>
    <item>
      <title>Marshall meets Bartik: Revisiting the mysteries of the trade</title>
      <link>https://paperplaine.com/papers/marshall-meets-bartik-revisiting-the-mysteries-of-the-trade</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/marshall-meets-bartik-revisiting-the-mysteries-of-the-trade</guid>
      <pubDate>Wed, 29 Apr 2026 09:13:04 GMT</pubDate>
      <author>Yasusada Murata, Ryo Nakajima</author>
      <category>Economics</category>
      <description>How talented inventors moving to your city make everyone more creative</description>
      <content:encoded><![CDATA[<p><em>How talented inventors moving to your city make everyone more creative</em></p><p>When top inventors move into a region, local inventors become significantly more productive — even those who don't work together or share companies. This reveals that innovative ideas spread through the air in ways that can't be fully contained, suggesting that knowledge acts more like weather than property. The researchers found that state tax differences distort where inventive talent concentrates, reshaping innovation patterns across the country.</p><p><strong>Why it matters:</strong> States and cities compete fiercely to attract top talent through tax breaks and subsidies, betting that star inventors will boost local innovation. This research shows those bets are grounded in real effects — but also reveals a hidden cost: tax-driven clustering means inventive activity ends up in the wrong places, leaving other regions less innovative than they'd naturally be. Understanding these spillovers could help policymakers design smarter incentives that benefit entire regions rather than just chasing individual winners.</p>]]></content:encoded>
    </item>
    <item>
      <title>The Reservation Inflation of Hard Money: Gold-Standard Deflation and the Real Expansion of Nominal Claims, 1873-1896</title>
      <link>https://paperplaine.com/papers/the-reservation-inflation-of-hard-money-gold-standard-deflation-and-the-real</link>
      <guid isPermaLink="true">https://paperplaine.com/papers/the-reservation-inflation-of-hard-money-gold-standard-deflation-and-the-real</guid>
      <pubDate>Wed, 29 Apr 2026 03:10:23 GMT</pubDate>
      <author>Ran Huang</author>
      <category>Economics</category>
      <description>Why deflation can still inflate the real value of debt</description>
      <content:encoded><![CDATA[<p><em>Why deflation can still inflate the real value of debt</em></p><p>During the late 1800s gold standard, prices fell sharply in Britain and the US—yet the real value of fixed debts and financial claims rose dramatically. Between 1873 and 1896, British prices dropped 18% while the actual purchasing power of debt obligations climbed 22%. This shows that hard money constrains one type of inflation while unleashing another: deflation makes debts heavier, even as it makes goods cheaper.</p><p><strong>Why it matters:</strong> This reshapes how we think about monetary policy and economic stability. It suggests that tying currency to gold doesn't eliminate inflationary pressure—it redirects it toward savers and creditors at the expense of borrowers and workers. During deflationary periods, farms and businesses carrying fixed debts face mounting real obligations even as revenues shrink, which may explain why the 1873–1896 era sparked widespread farmer unrest and political upheaval despite falling prices.</p>]]></content:encoded>
    </item>
  </channel>
</rss>