[exploding note] Apply to Mentor Secure Program Synthesis Fellowship by May 5th

https://apartresearch.com/fellowships/the-secure-program-synthesis-fellowship
A ton of you reading this would make great mentors, but the mentor application deadline is a mere week away, so we have an emergency edition of the newsletter to persuade you to apply.
In this sentence, I am very informally and unofficially implying that direct AI security widgets/products will get Very Special Bonus Points in the mentor peer review / rating that I do.
Additionally, there will be a hackathon May 22-24 on the same topics.
Dates: June - September 2026
Gross world LoC (lines of code) is skyrocketing due to AI. By default, we have no way of knowing whether what we're vibecoding is doing what we think it's doing. Secure program synthesis is the intervention: throw advanced software correctness techniques at all the code coming out of the AIs.
This fellowship offers part-time research opportunities on mentor-led projects at the intersection of formal methods, AI systems, and security. Participants work in small teams to tackle challenging, underspecified problems in specification, validation, and adversarial robustness.
A joint initiative of Apart Research and https://atlascomputing.org/.
Key Dates
- April 28th - May 5th 2026: Mentor Applications open
- May 10th 2026: Mentors Announced and Participant Applications open
- May 26th 2026: Participant Applications Close
- June 9th 2026: Participants Announced
- June 15th 2026: Projects Begin
- July/August 2026: Mid-Project Presentations & Milestone Submissions
- August/September 2026: Final submissions
- September/October 2026: Demo day
Focus Areas
Specification Elicitation
Develop tools and workflows for extracting formal specifications from ambiguous, distributed, or implicit sources (e.g., documentation, legacy systems, human stakeholders). Projects may include structured editors, GUIs, or pipelines that translate informal requirements into formal representations (e.g., Lean), building on approaches like “SpecIDE.”
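For a flavor of what "formal representations" means here, consider a toy sketch (ours, not from SpecIDE or any official project): the informal requirement "the function returns the largest element of a nonempty list" rendered as a Lean 4 proposition that an elicitation pipeline might emit.

```lean
-- Hypothetical output of an elicitation tool: the informal requirement
-- "return the largest element of a nonempty list" as a Lean 4 spec.
def IsMax (xs : List Nat) (m : Nat) : Prop :=
  m ∈ xs ∧ ∀ x ∈ xs, x ≤ m

-- An implementation is correct when it satisfies the spec on nonempty input.
def Correct (f : List Nat → Nat) : Prop :=
  ∀ xs : List Nat, xs ≠ [] → IsMax xs (f xs)
```

The point of tooling in this area is getting from the English sentence to something like `IsMax` without a formal-methods expert in the loop.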
Specification Validation
Design methods to verify that extracted specifications are correct and complete. This includes techniques for testing, cross-checking, or formally validating whether a specification accurately captures intended system behavior.
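One simple cross-checking idea, sketched in Python (a toy of our own devising, not a fellowship deliverable): a spec should accept a known-good implementation and reject a known-bad one on random inputs. An incomplete spec gets caught when a buggy implementation slips through.

```python
import random

# Hypothetical candidate spec someone "elicited" for a sorting routine:
# the output must be nondecreasing. It is incomplete, since it says
# nothing about the output being a permutation of the input.
def candidate_spec(xs, ys):
    return all(a <= b for a, b in zip(ys, ys[1:]))

def validate(candidate, good_impl, bad_impl, trials=200):
    """Cross-check a spec: it should accept a known-good implementation
    and reject a known-bad one on random inputs."""
    for _ in range(trials):
        xs = [random.randint(0, 9) for _ in range(random.randint(0, 8))]
        if not candidate(xs, good_impl(xs)):
            return "rejects correct implementation"
        if candidate(xs, bad_impl(xs)):
            return "accepts buggy implementation"
    return "no counterexample found"

# A bug that drops every element still yields a "sorted" output,
# so the incomplete spec fails to reject it.
print(validate(candidate_spec, sorted, lambda xs: []))
# prints "accepts buggy implementation"
```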
Spec-Driven Development & Evaluation
Explore workflows where specifications generate multiple candidate implementations, which are then evaluated against each other. Projects may involve building “arenas” to compare robustness, correctness, or performance across implementations derived from the same spec.
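A minimal "arena" might look like the following Python sketch (all names illustrative, nothing here is the fellowship's actual tooling): several candidate implementations of the same spec, here integer square root, scored against a shared check suite derived from the spec.

```python
import math

# Three candidate implementations of the same spec: floor(sqrt(n)).
def cand_newton(n):
    x = n
    while x * x > n:
        x = (x + n // x) // 2
    return x

def cand_float(n):
    return int(math.sqrt(n))  # breaks for large n due to float rounding

def cand_naive(n):
    return n // 2             # wrong for most inputs

# The spec itself, as an executable check: r is floor(sqrt(n)) iff
# r*r <= n < (r+1)*(r+1).
def meets_spec(f, n):
    r = f(n)
    return r * r <= n < (r + 1) * (r + 1)

def arena(candidates, inputs):
    """Rank candidates by how many spec checks they pass."""
    return sorted(
        ((sum(meets_spec(f, n) for n in inputs), f.__name__) for f in candidates),
        reverse=True,
    )

inputs = [0, 1, 2, 3, 15, 16, 10**16 - 1]
for score, name in arena([cand_newton, cand_float, cand_naive], inputs):
    print(f"{name}: {score}/{len(inputs)}")
```

The large input `10**16 - 1` is what separates the float-based candidate from the exact one: `float(10**16 - 1)` rounds up to `10**16`, so `int(math.sqrt(...))` overshoots by one.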
Adversarial Robustness for FM & QA Tools
Investigate failure modes in formal methods pipelines, LLM-assisted tooling, and QA systems. Projects may focus on adversarial inputs, robustness guarantees, or identifying weaknesses in automated reasoning and verification systems.
Full list of mentor projects coming soon.
https://www.lesswrong.com/posts/SJdjLg5zSqrb2kMc7/exploding-note-apply-to-mentor-secure-program-synthesis
