CSUN 2026: Highlights and Takeaways
The 41st Annual CSUN Assistive Technology Conference was once again an explosion of innovation, and our team was in attendance throughout the conference.
DAISY CTO Avneesh Singh shares with us an overview of the sessions he attended.
Digital Accessibility Trends and Insights for 2026 (Deque)
This session highlighted a growing tension between accessibility efforts and the rapid rise of AI-generated content. Speakers pointed out that many accessibility challenges today stem from broken processes: slow workflows create backlogs that eventually turn into technical debt. With AI now generating large volumes of code, often trained on inaccessible web content, there is a risk of scaling these issues further. A key message was that technical compliance alone is not enough; accessibility must focus on the real human experience.
AI PDF Conversion: Escaping Static Files into Semantic Freedom
This session presented a large-scale open source initiative from the University of Illinois Chicago to convert PDFs into accessible formats using AI. The workflow involves extracting text, interpreting document structure using AI, validating it against the visual layout, and generating image descriptions through a separate AI system. The output is structured content such as Markdown and HTML. The project aims to process over two million PDFs, demonstrating how AI can be used not just for conversion but for creating semantically meaningful and machine-readable documents at scale.
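The session described the approach rather than the code, but the pipeline maps naturally onto a few distinct steps. As a rough illustration of how such a workflow fits together, the sketch below uses pypdf for text extraction and simple placeholder functions standing in for the structure-interpretation and image-description models; it is not the UIC project's actual implementation, and the function and field names are assumed for illustration only.

```python
# Rough sketch of an AI-assisted PDF-to-Markdown pipeline (illustrative only).
# The real project also validates the interpreted structure against the
# visual layout of the page, which is omitted here.
from pypdf import PdfReader


def interpret_structure(raw_text: str) -> list[dict]:
    """Stand-in for an AI model that labels headings, paragraphs, lists,
    tables, and images in the extracted text (hypothetical placeholder)."""
    # Trivial fallback so the sketch runs end to end: treat everything
    # as one paragraph. A real system would return a structured outline.
    return [{"type": "paragraph", "text": raw_text}]


def describe_image(image_bytes: bytes) -> str:
    """Stand-in for a separate AI system that generates image descriptions
    (hypothetical placeholder)."""
    return "Image description would be generated here."


def pdf_to_markdown(path: str) -> str:
    """Convert one PDF into structured Markdown."""
    reader = PdfReader(path)
    raw_text = "\n".join(page.extract_text() or "" for page in reader.pages)

    parts = []
    for block in interpret_structure(raw_text):
        if block["type"] == "heading":
            parts.append("#" * block["level"] + " " + block["text"])
        elif block["type"] == "image":
            alt = describe_image(block["image_bytes"])
            parts.append(f"![{alt}](images/{block['name']})")
        else:
            parts.append(block["text"])
    return "\n\n".join(parts)
```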
AI for Accessibility: Balancing Automation with Human Oversight
This session explored how AI complements traditional rule-based automation in accessibility. While automation remains effective for predictable issues like missing labels or incorrect roles, AI adds the ability to understand context and identify patterns. It can suggest fixes and even generate code, but it struggles with business logic and user intent. Speakers emphasized that AI must be guided by human expertise, with strong guardrails in place.
Automation Beyond Axe Core (Deque)
Focusing on the evolution of accessibility testing tools, this session described how the axe ecosystem is expanding beyond rule-based checks. The architecture now includes layers for analysis and remediation, supported by AI capabilities. New integrations with tools like Slack and Teams bring accessibility into everyday workflows.
AI and Web Accessibility in 2026: A Progress Report
This session introduced emerging capabilities such as agent-based AI integrated into assistive technologies like JAWS. These agents can analyze interfaces, assist with complex tasks, and even perform actions on behalf of users. For example, they can help navigate complex websites or complete forms. However, these capabilities come with risks, as AI agents can make incorrect decisions. The session illustrated how assistive technologies are evolving beyond passive tools into more active, task-oriented systems.
Productivity Reimagined: Leveraging AI and Copilot in M365 Apps
This session showcased how AI is being integrated into everyday productivity tools like Microsoft Word, Outlook, and PowerPoint. Features include automatic generation of alt text for images, document summarization, email review, and formatting assistance. Accessibility improvements such as better logical reading order, enhanced table handling, and improved accessibility checking were also highlighted. The tools are designed to work within existing workflows, giving users control while embedding accessibility into routine tasks rather than treating it as a separate activity.
What’s New in Google Accessibility
Google presented a wide range of updates across its platforms, with a strong focus on AI integration. Features such as Gemini-powered screen understanding, live captions, and contextual interaction were demonstrated. Enhancements in Android accessibility included improvements to TalkBack, expressive captions, and better support for hearing devices. New developments in areas like extended reality, gesture-based controls, and multimodal interaction show how accessibility is expanding beyond traditional interfaces into more immersive and dynamic environments.
Can Today’s Smart Glasses Do What Many Blind People Really Need?
This session compared multiple smart glasses systems across real-world tasks such as object recognition, navigation, and document handling. Different devices performed better in different scenarios; for example, one excelled at identifying objects while another handled document-related tasks more effectively. The comparison highlighted that while these technologies are advancing quickly, they are still specialized and context-dependent, with no single solution addressing all user needs. In the end, one product did look better than the others, but this is a fast-changing AI world.
AI and AT: Balancing Security with Productivity
This session examined the challenges of using AI within secure environments. While AI can provide powerful capabilities such as summarizing content and interpreting visual information, it also raises concerns about data privacy and predictability. Different deployment models—public cloud, enterprise-hosted, and on-device AI—offer varying levels of control and risk. Assistive technologies must adapt to these constraints, allowing organizations to configure how AI is used while maintaining both accessibility and security.
Cognitive Accessibility and Content Design
Discussions on cognitive accessibility focused on making content easier to understand and navigate. Key recommendations included using plain language, organizing content from main ideas to details, keeping sentences and paragraphs short, and avoiding complex navigation structures. Clear headings, consistent structure, and explicit instructions were emphasized. The session also highlighted the importance of not hiding helpful information so that it is exposed only to assistive technologies, but instead making it available to all users, benefiting a broader audience.
GitHub Continuous AI for Accessibility
This session demonstrated how AI is being integrated into development platforms like GitHub to support accessibility. Tools can now automatically detect accessibility issues, create issues in repositories, and assign them for resolution. AI assistants can help generate code, review changes, and suggest improvements.
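The session focused on platform capabilities rather than implementation details. As a minimal illustration of the detect-and-file loop it described, the sketch below uses the standard GitHub REST API to open an issue for each violation reported by an automated scan. This is not GitHub's Continuous AI tooling itself; the repository name, environment variable, and scan output shown are hypothetical.

```python
# Minimal sketch: file a GitHub issue for each accessibility violation
# found by an automated scan (e.g. an axe-core run). Illustrative only.
import os
import requests

GITHUB_TOKEN = os.environ["GITHUB_TOKEN"]   # assumed to be set in the environment
REPO = "example-org/example-site"           # hypothetical repository


def file_accessibility_issue(violation: dict) -> int:
    """Create a GitHub issue for one violation and return its issue number."""
    response = requests.post(
        f"https://api.github.com/repos/{REPO}/issues",
        headers={
            "Authorization": f"Bearer {GITHUB_TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        json={
            "title": f"a11y: {violation['rule']} on {violation['page']}",
            "body": violation["description"],
            "labels": ["accessibility"],
            "assignees": violation.get("assignees", []),
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["number"]


# Hypothetical scan results; a real workflow would read these from the
# output of an automated accessibility check.
violations = [
    {
        "rule": "image-alt",
        "page": "/news",
        "description": "Images must have alternate text.",
    },
]
for v in violations:
    print("Filed issue #", file_accessibility_issue(v))
```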
Apple Accessibility Session (Friday)
This session showcased how accessibility is deeply integrated across Apple’s ecosystem, with a strong emphasis on on-device intelligence, personalization, and seamless cross-device experiences. Demonstrations included features like Magnifier on Mac, which allows users to stream and enhance real-world visuals, and Braille Screen Input and display integration, enabling Apple devices to function effectively as braille note-taking tools. The session also highlighted improvements in speech accessibility, including personalized voice generation using on-device machine learning, and features like Live Listen, which can amplify real-world audio through connected devices.
A significant focus was on reducing cognitive and physical effort through features such as Assistive Access, which simplifies the interface for users with cognitive disabilities, and Accessibility Reader, which allows customization of fonts, spacing, and colors for better readability. Mobility features like AssistiveTouch, Voice Control, and eye tracking were demonstrated, showing how users can interact with devices without traditional input methods. Additional innovations included Vehicle Motion Cues to reduce motion sickness, background sounds to improve focus, and Name Recognition to alert users when their name is spoken.
Reading and Writing Math with a Screen Reader: Third Edition
This session demonstrated how recent advances in screen readers, MathML, and Microsoft Word are making it significantly easier for blind and visually impaired users to both read and write mathematics independently. Presenters from the DAISY Consortium and Microsoft showed how math content encoded in MathML can be navigated using speech and braille, allowing users to explore equations step by step and understand their structure. Tools like MathCAT were highlighted for providing clear speech output and support for Nemeth and UEB braille. The session also showcased how users can move seamlessly between formats, such as reading math in EPUB or web content and pasting it into Microsoft Word for editing, using built-in equation tools with improved screen reader support. Overall, the session illustrated a more integrated workflow for accessing, creating, and interacting with mathematical content across platforms.
Our thanks to Avneesh for his thought-provoking overview.
