<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://tukamilano.github.io/automated-theory-construction-lean/feed.xml" rel="self" type="application/atom+xml" /><link href="https://tukamilano.github.io/automated-theory-construction-lean/" rel="alternate" type="text/html" /><updated>2026-04-01T17:16:03+09:00</updated><id>https://tukamilano.github.io/automated-theory-construction-lean/feed.xml</id><title type="html">Automated Theory Construction</title><subtitle>Research notes and posts for the Automated Theory Construction project.</subtitle><entry><title type="html">Progress Update</title><link href="https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/29/progress-update.html" rel="alternate" type="text/html" title="Progress Update" /><published>2026-03-29T21:00:00+09:00</published><updated>2026-03-29T21:00:00+09:00</updated><id>https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/29/progress-update</id><content type="html" xml:base="https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/29/progress-update.html"><![CDATA[<p>The implementation of the features I had planned is now largely complete. Going forward, I will continue making incremental improvements, while shifting the main focus toward exploring how this system can be applied in broader contexts.</p>

<p>Improving the system to discover and verify more complex proofs would require a substantial increase in inference resources. Given this constraint, I plan to prioritize expanding its range of applications rather than pushing purely on raw proof complexity for now.</p>

<h2 id="parallelization">Parallelization</h2>

<p>To improve throughput and avoid bottlenecks from slow sessions, I introduced a parallel execution scheme:</p>

<ul>
  <li>Problems are taken from <code class="language-plaintext highlighter-rouge">open_problems</code> and processed through the pipeline
<em>(formalization of the statement → natural language proof → formal proof → expansion)</em>
with up to <em>n</em> problems running concurrently.</li>
  <li>The system does not wait for slower sessions; it proceeds with other available problems.</li>
  <li>The <code class="language-plaintext highlighter-rouge">main_theorem_session</code> runs in parallel as a dedicated single slot, separate from the <em>n</em> slots for <code class="language-plaintext highlighter-rouge">open_problems</code>.</li>
</ul>
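<p>The scheme above can be sketched with Python’s <code class="language-plaintext highlighter-rouge">concurrent.futures</code>. This is an illustrative sketch, not the actual implementation: <code class="language-plaintext highlighter-rouge">run_pipeline</code> is a placeholder for the four-stage pipeline.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def run_pipeline(problem):
    # placeholder for: formalize statement, natural language proof,
    # formal proof, expansion
    return "done:" + problem

def run_parallel(open_problems, n):
    """Keep up to n open problems in flight; never block on a slow session."""
    queue = list(open_problems)
    pending = set()
    results = []
    with ThreadPoolExecutor(max_workers=n) as pool:
        while queue or pending:
            # fill free slots without waiting for slow sessions
            for _ in range(n - len(pending)):
                if queue:
                    pending.add(pool.submit(run_pipeline, queue.pop(0)))
            # resume as soon as any one session finishes
            done, pending = wait(pending, return_when=FIRST_COMPLETED)
            results.extend(f.result() for f in done)
    return results
</code></pre></div></div>

<p>The dedicated slot for <code class="language-plaintext highlighter-rouge">main_theorem_session</code> would run alongside this pool in the same way, as a separate single-worker executor.</p>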

<p>This allows the system to maintain steady progress without being blocked by particularly difficult instances.</p>

<h2 id="expanding-applications">Expanding Applications</h2>

<p>Originally, this repository aimed to construct theories in unexplored domains. However, this raises a natural question:
<em>what is the value of building theories in areas that humans themselves do not yet understand?</em></p>

<p>Rather than viewing this as a limitation, I now interpret it more positively. The system can be seen as a tool that expands a researcher’s imagination when they conceive a new, unexplored theme.</p>

<p>In this sense, the goal is not merely to target niche or underexplored fields, but to support the early-stage development of entirely new research directions.</p>

<p>As a concrete step in this direction, I am interested in collaborating with researchers working on:</p>

<ul>
  <li>a comprehensive analysis of the expressive power of individual rules in combinatory categorial grammar, and</li>
  <li>a structural understanding of the landscape surrounding mildly context-sensitive grammars.</li>
</ul>

<p>I believe this system has the potential to accelerate such investigations by systematically generating and organizing candidate statements and their formal properties.</p>

<h2 id="outlook">Outlook</h2>

<p>I will continue exploring new application domains where this framework can contribute meaningfully, while refining the system to better support theory construction workflows.</p>]]></content><author><name></name></author><category term="notes" /><category term="progress" /><category term="draft" /><summary type="html"><![CDATA[The implementation of the features I had planned is now largely complete. Going forward, I will continue making incremental improvements, while shifting the main focus toward exploring how this system can be applied in broader contexts.]]></summary></entry><entry><title type="html">Progress Update</title><link href="https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/27/progress-update.html" rel="alternate" type="text/html" title="Progress Update" /><published>2026-03-27T21:00:00+09:00</published><updated>2026-03-27T21:00:00+09:00</updated><id>https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/27/progress-update</id><content type="html" xml:base="https://tukamilano.github.io/automated-theory-construction-lean/notes/progress/draft/2026/03/27/progress-update.html"><![CDATA[<h1 id="main-loop-session-and-search-policy-for-automated-theory-construction">Main Loop Session and Search Policy for Automated Theory Construction</h1>

<h2 id="overview">Overview</h2>

<p><em>Note: This section documents newly introduced design considerations as part of ongoing progress in the system.</em></p>

<p>The search loop in automated theory construction tends to exhibit a strong bias toward generating transformed statements derived from existing theorems, including generalizations and technical lemmas. While such statements are useful for extracting local properties of a theory, they do not necessarily contribute to a unified or compressed global structure.</p>

<p>To address this limitation, we introduce a <em>main-loop session</em> centered around identifying and proving structurally significant theorems (“main theorems”). This mechanism is designed to periodically reorganize and compress the accumulated theory.</p>

<hr />

<h2 id="main-loop-session">Main-Loop Session</h2>

<p>The main-loop session operates as follows:</p>

<ol>
  <li>
    <p><strong>Trigger Condition</strong>
Every time <em>N</em> new lemmas are added to <code class="language-plaintext highlighter-rouge">Derived.lean</code>, a main-theorem session is triggered.</p>
  </li>
  <li>
    <p><strong>Candidate Suggestion</strong>
The system analyzes <code class="language-plaintext highlighter-rouge">Derived.lean</code> and uses <code class="language-plaintext highlighter-rouge">main_theorem_suggester.md</code> to propose at most one candidate for a main theorem.
Strict filtering criteria are imposed, and proposing no candidate is explicitly allowed.</p>
  </li>
  <li>
    <p><strong>Proof Planning</strong>
If a candidate is proposed, <code class="language-plaintext highlighter-rouge">main_theorem_planner.md</code> is used to construct a natural-language proof plan.</p>
  </li>
  <li>
    <p><strong>Formalization Loop</strong>
Based on <code class="language-plaintext highlighter-rouge">.codex/agents.md</code> and <code class="language-plaintext highlighter-rouge">SKILL.md</code>, the system attempts to formalize the theorem in Lean.
The loop continues until all <code class="language-plaintext highlighter-rouge">sorry</code> placeholders are eliminated.</p>
  </li>
  <li>
    <p><strong>Post-Success Expansion</strong>
If formalization succeeds:</p>

    <ul>
      <li>The theorem is appended to <code class="language-plaintext highlighter-rouge">Derived.lean</code>.</li>
      <li><code class="language-plaintext highlighter-rouge">post_theorem_expander.md</code> is invoked to generate five new open problems.</li>
    </ul>
  </li>
</ol>
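<p>In outline, the five steps above compose into a guarded pipeline. The following Python skeleton is hypothetical; the callables stand in for the prompt-driven stages (<code class="language-plaintext highlighter-rouge">main_theorem_suggester.md</code>, <code class="language-plaintext highlighter-rouge">main_theorem_planner.md</code>, the formalization loop, and <code class="language-plaintext highlighter-rouge">post_theorem_expander.md</code>).</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def main_theorem_session(new_lemmas, N, suggest, plan, formalize, expand):
    """Hypothetical skeleton of one main-loop session."""
    if new_lemmas % N != 0:
        return None                 # 1. trigger: every N new lemmas
    candidate = suggest()           # 2. at most one candidate; None is allowed
    if candidate is None:
        return None
    proof_plan = plan(candidate)    # 3. natural-language proof plan
    theorem = formalize(candidate, proof_plan)  # 4. loop until no sorry remains
    if theorem is None:
        return None
    return theorem, expand(theorem)  # 5. append, then five new open problems
</code></pre></div></div>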

<p>This process introduces periodic global restructuring pressure into the otherwise local search dynamics.</p>

<hr />

<h2 id="pick-up-policy-open-problem-prioritization">Pick-Up Policy (Open Problem Prioritization)</h2>

<p>To improve search efficiency, we introduce a prioritization scheme for open problems.</p>

<h3 id="priority-levels">Priority Levels</h3>

<p>Each open problem is assigned one of three priority levels: <code class="language-plaintext highlighter-rouge">high</code>, <code class="language-plaintext highlighter-rouge">medium</code>, or <code class="language-plaintext highlighter-rouge">low</code>, determined by <code class="language-plaintext highlighter-rouge">open_problem_prioritizer.md</code>.</p>

<h3 id="core-rubric">Core Rubric</h3>

<ul>
  <li>
    <p><code class="language-plaintext highlighter-rouge">high</code></p>

    <ul>
      <li>Connects existing theorem clusters.</li>
      <li>Gives a strong equivalence, characterization, or existence statement.</li>
      <li>Looks likely to unlock many future problems or reorganize the theory.</li>
    </ul>
  </li>
  <li>
    <p><code class="language-plaintext highlighter-rouge">medium</code></p>

    <ul>
      <li>A natural local extension or useful nearby consequence.</li>
      <li>Likely to help only one or two nearby problems.</li>
    </ul>
  </li>
  <li>
    <p><code class="language-plaintext highlighter-rouge">low</code></p>

    <ul>
      <li>Cosmetic variant, shallow restatement, obvious weakening, or low-utility statement in the current <code class="language-plaintext highlighter-rouge">Derived.lean</code> context.</li>
      <li>Already looks effectively covered by current verified theorems up to a shallow reformulation.</li>
    </ul>
  </li>
</ul>

<h3 id="additional-policies">Additional Policies</h3>

<ul>
  <li>If an open problem fails twice, it is removed from the pool.</li>
  <li>
    <p>Priorities are periodically refreshed:</p>

    <ul>
      <li>After every <em>M</em> new additions.</li>
      <li>After each successful main-loop formalization.</li>
    </ul>
  </li>
</ul>
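<p>These bookkeeping rules can be sketched as follows; the function names are hypothetical, and the actual policy lives in <code class="language-plaintext highlighter-rouge">open_problem_prioritizer.md</code>.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def prune_pool(pool, failures, max_failures=2):
    """Drop any open problem that has already failed max_failures times."""
    return [p for p in pool if max_failures > failures.get(p, 0)]

def refresh_due(additions_since_refresh, M, main_loop_succeeded):
    """Priorities are refreshed every M additions, or after a
    successful main-loop formalization."""
    return additions_since_refresh >= M or main_loop_succeeded
</code></pre></div></div>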

<p>This prioritization ensures that computational resources are allocated toward structurally meaningful progress.</p>

<hr />

<h2 id="scope-and-generality">Scope and Generality</h2>

<p>To improve general applicability, we relax the requirement that the system must always start strictly from axioms. However, to avoid drift during exploration, we impose the requirement that a core Lean theory file defining the domain must always be present.</p>

<p>This balance allows:</p>

<ul>
  <li>Controlled exploration within a defined domain.</li>
  <li>Flexibility in incorporating partially structured or pre-existing theories.</li>
</ul>

<hr />

<h2 id="research-direction">Research Direction</h2>

<p>The long-term objective of this framework is to:</p>

<ul>
  <li>Enable systematic rediscovery and exploration of niche or underdeveloped domains.</li>
  <li>Revisit and restructure “mature” or seemingly exhausted areas of theory.</li>
  <li>Improve usability and accessibility of automated theory construction systems.</li>
</ul>

<p>By combining local search with periodic global restructuring, the system aims to produce theories that are not only larger, but also more coherent and interpretable.</p>

<hr />

<h2 id="summary">Summary</h2>

<p>The introduction of the main-loop session and prioritization policies transforms the search process from purely accumulative to structurally aware. Instead of merely growing <code class="language-plaintext highlighter-rouge">Derived.lean</code>, the system actively attempts to compress, reorganize, and elevate the theory through strategically selected main theorems.</p>]]></content><author><name></name></author><category term="notes" /><category term="progress" /><category term="draft" /><summary type="html"><![CDATA[Main Loop Session and Search Policy for Automated Theory Construction]]></summary></entry><entry><title type="html">Progress</title><link href="https://tukamilano.github.io/automated-theory-construction-lean/notes/draft/2026/03/25/progress.html" rel="alternate" type="text/html" title="Progress" /><published>2026-03-25T20:45:00+09:00</published><updated>2026-03-25T20:45:00+09:00</updated><id>https://tukamilano.github.io/automated-theory-construction-lean/notes/draft/2026/03/25/progress</id><content type="html" xml:base="https://tukamilano.github.io/automated-theory-construction-lean/notes/draft/2026/03/25/progress.html"><![CDATA[<p>One thing that feels newly possible is a style of mathematical exploration that is not centered on a single target theorem from the beginning.</p>

<p>AI-generated questions still leave room for improvement, but even at this stage, it has become possible to automatically explore a surprisingly wide range of properties by manipulating expressions and checking what follows from them. That already changes the shape of the workflow.</p>

<p>There seem to be two different modes of doing mathematics: exploration mode and problem-solving mode.</p>

<p>Formalizing an important theorem is, of course, a valuable goal. But when working in an area that is still largely unexplored, it is just as important to build a broader understanding of the terrain around the result you care about. You need to know not only how to prove one theorem, but also what kinds of structures, patterns, and neighboring facts are present in the surrounding theory.</p>

<p>This repository is not mainly about formalizing one major theorem. Its more concrete aim is to give AI a kind of mathematical curiosity: to let it automatically discover, organize, and refine interesting facts within a domain. In the long run, I want this system to help build a <code class="language-plaintext highlighter-rouge">Basic.lean</code> file for areas that have not yet been systematically developed.</p>

<p>That is the goal. At the moment, there are still some obvious challenges.</p>

<h2 id="interpretability">Interpretability</h2>

<p>Because I have been pushing toward a fully automated workflow, even I sometimes lose track of what the generated output is actually doing. The system may produce results, but the overall picture can become difficult to interpret from the developer’s side.</p>

<h2 id="where-to-start">Where To Start</h2>

<p>I still do not have a good answer to a basic question: how can we make the system write clean, well-motivated definitions, as you find in Mathlib?</p>]]></content><author><name></name></author><category term="notes" /><category term="draft" /><summary type="html"><![CDATA[One thing that feels newly possible is a style of mathematical exploration that is not centered on a single target theorem from the beginning.]]></summary></entry><entry><title type="html">Growing Theories with LLMs and Lean</title><link href="https://tukamilano.github.io/automated-theory-construction-lean/research/lean/llm/2026/03/23/automated-theory-construction-with-llms.html" rel="alternate" type="text/html" title="Growing Theories with LLMs and Lean" /><published>2026-03-23T09:00:00+09:00</published><updated>2026-03-23T09:00:00+09:00</updated><id>https://tukamilano.github.io/automated-theory-construction-lean/research/lean/llm/2026/03/23/automated-theory-construction-with-llms</id><content type="html" xml:base="https://tukamilano.github.io/automated-theory-construction-lean/research/lean/llm/2026/03/23/automated-theory-construction-with-llms.html"><![CDATA[<p>About six years ago, when I was in high school, I often felt overwhelmed by the rows of advanced mathematics books in large bookstores. At the same time, I was fascinated by the fact that such rich theories could emerge from relatively small sets of axioms.</p>

<p>Since then, I have been interested in the idea of generating mathematical structure automatically from simple foundations. Traditional automated theorem proving has been powerful for proving individual statements, but it has been less effective as a tool for exploring and extending theories at a higher level.</p>

<p>Recently, the mathematical abilities of large language models have improved substantially. They can assist with proofs, suggest reformulations, and sometimes propose useful follow-up questions. Motivated by this, I started the <a href="https://github.com/tukamilano/Automated_Theory_Construction"><code class="language-plaintext highlighter-rouge">automated-theory-construction-lean4</code></a> repository to explore how LLMs can complement formal verification and help with theory construction itself.</p>

<h2 id="overview">Overview</h2>

<p>The basic idea is simple: start from a small axiom system, introduce a few seed propositions, and then repeatedly try to formalize, verify, and extend the resulting theory.</p>

<p>Each iteration looks roughly like this:</p>

<ol>
  <li>Put a base axiom system and a collection of elementary seed propositions into an open problem queue.</li>
  <li>Retrieve one proposition from the queue and ask Codex CLI to formalize either a proof or a counterexample within a fixed time budget.</li>
  <li>If the proposition is successfully formalized and verified, add it to <code class="language-plaintext highlighter-rouge">Derived.lean</code> as a theorem.</li>
  <li>Use the logs produced during the attempt to decide what to do next:
if the proposition was not formalized, put it back into the queue together with its subgoals;
if it was formalized, generate more general candidate problems and add them to the queue.</li>
</ol>
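<p>One iteration of this loop can be sketched as follows; all names here are illustrative stand-ins for the actual prompt-driven steps.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def iterate(queue, derived, try_formalize, subgoals, generalize):
    """One pass of the prove-or-requeue loop described above."""
    prop = queue.pop(0)
    result = try_formalize(prop)     # proof or counterexample, fixed time budget
    if result is None:
        # not formalized: requeue together with its subgoals
        queue.extend([prop] + subgoals(prop))
    else:
        derived.append(result)       # verified: record in Derived.lean
        queue.extend(generalize(result))  # propose more general candidates
    return queue, derived
</code></pre></div></div>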

<p>In this way, the system alternates between proving statements and proposing new directions for exploration.</p>

<p>The most important part is the prompt used to generate follow-up problems after a successful result:</p>

<div class="language-text highlighter-rouge"><div class="highlight"><pre class="highlight"><code>When the current problem is solved and verified (`verify_success = true` and `result = proof|counterexample`):
- Prefer outward-looking follow-up problems that extend the theory rather than merely staying near the last proof script.
- Favor, in roughly this order:
  1. natural generalizations or reusable abstractions
  2. converses, strict separations, or failure-of-converse statements
  3. existence, uniqueness, impossibility, or rigidity phenomena
  4. sharp boundary phenomena, minimal-hypothesis thresholds, or reusable structural dichotomies
  5. adjacent structural consequences that clarify the global shape of the theory
- It is good to return at least one candidate that meaningfully broadens, reinterprets, or reuses the verified result beyond the immediate local target.
- Prefer candidates whose resolution would teach something non-obvious about the theory or its models, rather than merely restating the solved fact in slightly altered form.
- If a more informative structural or threshold-style follow-up is available, prefer it over a nearby local rewrite.
- Also favor follow-up problems that vary the assumptions or structure of the theory to reveal robustness, thresholds, or failure modes.
</code></pre></div></div>

<p>The goal is not merely to stay close to the last proof, but to push outward toward statements that clarify the structure of the theory.</p>

<h2 id="how-it-works">How It Works</h2>

<p>The core loop lives in <code class="language-plaintext highlighter-rouge">scripts/run_loop.py</code>. At the moment, the implementation uses fixed paths:</p>

<ul>
  <li>theory: <code class="language-plaintext highlighter-rouge">AutomatedTheoryConstruction/Theory.lean</code></li>
  <li>accumulated theorems: <code class="language-plaintext highlighter-rouge">AutomatedTheoryConstruction/Derived.lean</code></li>
  <li>temporary verification file: <code class="language-plaintext highlighter-rouge">AutomatedTheoryConstruction/Scratch.lean</code></li>
  <li>initial seeds: <code class="language-plaintext highlighter-rouge">AutomatedTheoryConstruction/seeds.jsonl</code></li>
  <li>runtime state: <code class="language-plaintext highlighter-rouge">data/</code></li>
</ul>

<p>So this is not yet a fully generic multi-theory runner. Switching to a different theory currently requires editing these files directly.</p>

<p>Each iteration proceeds as follows:</p>

<ol>
  <li>Select the next open problem deterministically.</li>
  <li>If the problem is not already in Lean form, use <code class="language-plaintext highlighter-rouge">prover_statement</code> to translate it into a formal statement.</li>
  <li>Use <code class="language-plaintext highlighter-rouge">prover</code> to attempt a proof, a counterexample, or determine that the problem is currently stuck.</li>
  <li>Run <code class="language-plaintext highlighter-rouge">formalize</code>, then verify the result with:</li>
</ol>

<p><code class="language-plaintext highlighter-rouge">lake env lean AutomatedTheoryConstruction/Scratch.lean</code></p>

<ol start="5">
  <li>If verification fails, invoke <code class="language-plaintext highlighter-rouge">repair</code> repeatedly until the retry budget is exhausted.</li>
  <li>If verification succeeds, append the resulting theorem to <code class="language-plaintext highlighter-rouge">Derived.lean</code>.</li>
  <li>Run <code class="language-plaintext highlighter-rouge">expand</code> to generate additional candidate problems.</li>
  <li>Update the system state deterministically (<code class="language-plaintext highlighter-rouge">open</code>, <code class="language-plaintext highlighter-rouge">solved</code>, <code class="language-plaintext highlighter-rouge">counterexamples</code>).</li>
</ol>
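<p>The verification and repair cycle can be sketched as follows. The <code class="language-plaintext highlighter-rouge">lake env lean</code> invocation matches the command above; <code class="language-plaintext highlighter-rouge">formalize</code> and <code class="language-plaintext highlighter-rouge">repair</code> are stand-ins for the prompt-driven steps, not the real interfaces.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import subprocess

def verify(path="AutomatedTheoryConstruction/Scratch.lean"):
    """Type-check the scratch file with Lean; success means it compiles."""
    proc = subprocess.run(["lake", "env", "lean", path],
                          capture_output=True, text=True)
    return proc.returncode == 0, proc.stderr

def prove_with_repair(problem, formalize, repair, check, retry_budget=3):
    """Formalize, then repair until verification succeeds or budget is spent."""
    code = formalize(problem)
    for _ in range(retry_budget + 1):
        ok, log = check(code)
        if ok:
            return code              # append to Derived.lean, then expand
        code = repair(code, log)     # one repair attempt per failure
    return None
</code></pre></div></div>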

<p>Open problems may be either Lean-formal statements or semi-formal natural language prompts. Problems that cannot yet be formalized remain in the queue.</p>

<h2 id="three-stage-formalization">Three-Stage Formalization</h2>

<p>Proof formalization is split into three stages:</p>

<ol>
  <li>formalization of the statement</li>
  <li>natural language proof generation</li>
  <li>formalization of the proof in Lean</li>
</ol>

<p>This decomposition helps separate semantic understanding from syntactic verification.</p>
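<p>As a sketch, the three stages compose as a simple pipeline in which each stage’s output becomes the next stage’s input; the function names are illustrative.</p>

<div class="language-python highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def three_stage(statement_nl, formalize_statement, prove_informally, formalize_proof):
    """Chain the three formalization stages described above."""
    stmt_lean = formalize_statement(statement_nl)   # 1. formal statement
    proof_nl = prove_informally(stmt_lean)          # 2. natural language proof
    return formalize_proof(stmt_lean, proof_nl)     # 3. Lean proof of the statement
</code></pre></div></div>

<p>Keeping the informal proof as an explicit intermediate artifact is what makes it possible to debug semantic failures separately from Lean syntax errors.</p>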

<h2 id="current-experiment">Current Experiment</h2>

<p>At the moment, I am experimenting with code adapted from <a href="https://github.com/SnO2WMaN/provability-toy">SnO2WMaN/provability-toy</a>, which I incorporated into <code class="language-plaintext highlighter-rouge">Theory.lean</code>. I also added the following four propositions to <code class="language-plaintext highlighter-rouge">seeds.jsonl</code>:</p>

<div class="language-lean highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="o">∀</span> <span class="err">{</span>α : <span class="kt">Type</span> <span class="n">u</span><span class="err">}</span> [<span class="n">ACR</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Prov</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Reft</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">APS</span> α], <span class="o">∀</span> <span class="n">g</span> : <span class="n">ACR</span><span class="o">.</span><span class="n">G</span><span class="err">ö</span><span class="n">delFixpoint</span> α, <span class="n">g</span><span class="o">.1</span> <span class="o">≤</span> <span class="err">□</span><span class="n">g</span><span class="o">.1</span>
<span class="o">∀</span> <span class="err">{</span>α : <span class="kt">Type</span> <span class="n">u</span><span class="err">}</span> [<span class="n">ACR</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Prov</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Reft</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">APS</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">C5</span> α], <span class="o">∀</span> <span class="n">g</span> <span class="n">h</span> : <span class="n">ACR</span><span class="o">.</span><span class="n">G</span><span class="err">ö</span><span class="n">delFixpoint</span> α, <span class="n">g</span><span class="o">.1</span> <span class="o">≡</span> <span class="n">h</span><span class="o">.1</span>
<span class="o">∀</span> <span class="err">{</span>α : <span class="kt">Type</span> <span class="n">u</span><span class="err">}</span> [<span class="n">ACR</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Prov</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Reft</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">APS</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">C5</span> α] [<span class="n">Nonempty</span> (<span class="n">ACR</span><span class="o">.</span><span class="n">G</span><span class="err">ö</span><span class="n">delFixpoint</span> α)], <span class="err">⊠</span>(<span class="err">⊤</span> : α) <span class="o">≡</span> <span class="err">⊠⊠</span>(<span class="err">⊤</span> : α)
<span class="o">∀</span> <span class="err">{</span>α : <span class="kt">Type</span> <span class="n">u</span><span class="err">}</span> [<span class="n">ACR</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Prov</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">Reft</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">APS</span> α] [<span class="n">ACR</span><span class="o">.</span><span class="n">C5</span> α] [<span class="n">Nonempty</span> (<span class="n">ACR</span><span class="o">.</span><span class="n">G</span><span class="err">ö</span><span class="n">delFixpoint</span> α)], <span class="o">∃</span> <span class="n">g</span> : <span class="n">ACR</span><span class="o">.</span><span class="n">G</span><span class="err">ö</span><span class="n">delFixpoint</span> α, <span class="n">g</span><span class="o">.1</span> <span class="o">≡</span> <span class="err">⊠</span>(<span class="err">⊤</span> : α)
</code></pre></div></div>

<p>These are formalized versions of statements from Sections 2 and 3 of <a href="https://arxiv.org/abs/1602.05728v1">arXiv:1602.05728v1</a>.</p>

<p>The results obtained so far are available in <a href="https://gist.github.com/tukamilano/d25609aeb416005e24be23308c4abd3d">this gist</a>. I am currently receiving feedback on the generated code.</p>

<p>I attach the <a href="https://chatgpt.com/share/69c0ea0a-8d40-8008-bc0b-892de6a6b429">ChatGPT response</a> for reference.</p>

<h2 id="what-i-still-do-not-fully-understand">What I Still Do Not Fully Understand</h2>

<p>I am not a specialist in provability logic, so there are still parts of the underlying mathematics that I do not fully understand. For now, I see this project primarily as an experiment in how LLMs and Lean can be combined to support theory exploration.</p>

<p>I plan to study this area more carefully and write a more detailed explanation later. If you work in provability logic or related areas, I would be very happy to hear your thoughts.</p>

<h2 id="goals">Goals</h2>

<p>One major goal is to enable the system to incorporate more general propositions into the theory as theorems, especially statements that are less tightly tied to a specific internal language.</p>

<p>Ideally, I would also like the resulting theories to acquire a consistent style and structure, closer to files such as <code class="language-plaintext highlighter-rouge">Basic.lean</code> in Mathlib.</p>

<p>In logic, language theory, and type theory, it is common to come up with small axiom systems whose importance is not immediately clear. As a result, they tend to be deprioritized. I hope this project can serve as a tool for exploring what kinds of theories emerge from such systems and for deciding which ones are worth developing further.</p>

<p>I plan to keep improving the system as AI tools continue to advance. If you have small axiom systems you would like to experiment with, feel free to let me know.</p>

<h2 id="acknowledgments">Acknowledgments</h2>

<p>I am grateful to SnO2WMaN for publishing <code class="language-plaintext highlighter-rouge">provability-toy</code>.</p>]]></content><author><name></name></author><category term="research" /><category term="lean" /><category term="llm" /><summary type="html"><![CDATA[About six years ago, when I was in high school, I often felt overwhelmed by the rows of advanced mathematics books in large bookstores. At the same time, I was fascinated by the fact that such rich theories could emerge from relatively small sets of axioms.]]></summary></entry></feed>