Patterns and frameworks for AI UX
Practical Design Guidelines for Choosing the Right AI Interaction Models
April 20, 2025
Joel Tinley
Introduction
The emergence of artificial intelligence in product design is fundamentally shifting how we approach user experiences. As AI capabilities evolve from basic automation to sophisticated reasoning and generation, designers face new challenges in creating intuitive, effective interfaces that harness these capabilities while maintaining user trust and control. Through my work in complex domains like commercial real estate and enterprise computing, I've developed frameworks that address when and how to implement different AI interaction models.
The Interaction Model Spectrum
The most fundamental decision in AI UX design is determining the appropriate interaction model. This choice exists on a spectrum with three primary approaches:
Conversational Interfaces: Natural language interaction where users express needs through conversation
Direct Manipulation Controls (DMCs): Traditional GUI elements that users directly interact with
Hybrid Interfaces: Thoughtful combinations of conversational and direct manipulation elements
Rather than viewing these as competing approaches, I've found it more valuable to identify the contextual factors that determine which model best serves user needs in specific situations.
ChatGPT's Canvas feature: a great example of a hybrid interface, combining conversational and direct manipulation elements in one user experience.
Decision Frameworks for Interaction Models
Through extensive user research and implementation experience, I've identified key contextual dimensions that inform interaction model decisions:
Task Abstraction Level
When tasks are highly abstract, strategic, or open-ended, conversational interfaces excel. These scenarios require interpretation, synthesis, and reasoning—capabilities where AI can provide significant value. For example, asking "What are the top 5 properties with the highest ROI in the Midwest?" is better suited to conversational interaction.
Conversely, for concrete, granular tasks requiring precision, DMCs remain superior. A user adjusting specific parameters like price range, location filters, or property type benefits from the immediate tactile feedback and precision of sliders, checkboxes, and other familiar controls.
Cognitive Load and Familiarity
Cognitive load significantly impacts which interaction model works best. For users with low familiarity or experiencing high cognitive load—such as during onboarding or when navigating unknown systems—conversational interfaces reduce barriers by providing guidance and context. The system can surface relevant context or act as a guide when users feel overwhelmed.
For experienced users under low cognitive load with clear goals, direct controls or simple search inputs prove more efficient. These users prefer familiar, predictable controls to the potential verbosity of conversation.
Usage Frequency
Usage patterns dictate different interaction needs. Infrequent, exploratory users benefit from conversational approaches that lower entry barriers and provide scaffolding. A question like "What can I do with this tool?" is ideally handled conversationally.
Frequent users performing routine tasks gravitate toward the speed and efficiency of direct manipulation. A broker updating listing filters daily to match client preferences needs efficient, predictable controls rather than conversation.
Iterative Refinement Needs
Some tasks require iterative refinement with context retention. For instance, a user might ask to "Show me listings in LA" followed by "Now add properties under $2M." Conversational or hybrid models excel here by maintaining context between interactions.
When immediate feedback during adjustment is crucial, DMCs prove superior. Adjusting a square footage slider and seeing results update in real-time offers an immediacy that conversational back-and-forth cannot match.
Result Complexity
The nature of the results also guides interaction model choice:
Simple, textual results (like average cap rates) can be efficiently delivered through conversational or search interfaces
Complex, multimodal results benefit from hybrid approaches, where conversation might initiate the query, but direct manipulation allows users to filter, sort, or visualize the results
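Taken together, these dimensions can be expressed as a rough routing heuristic. The Python sketch below is illustrative only: the field names, the ordering of the checks, and the collapse of each dimension to a boolean are simplifying assumptions on my part, not a production rule set.

```python
from dataclasses import dataclass

@dataclass
class TaskContext:
    """Contextual dimensions from the framework above (illustrative names)."""
    abstract_task: bool        # open-ended/strategic vs. concrete and granular
    high_cognitive_load: bool  # unfamiliar user or overwhelming situation
    frequent_use: bool         # routine daily task vs. exploratory visit
    needs_iteration: bool      # refinement that must retain context
    realtime_feedback: bool    # immediate feedback while adjusting a control
    complex_results: bool      # multimodal results needing filter/sort/visualize

def choose_interaction_model(ctx: TaskContext) -> str:
    """Map contextual dimensions to one of the three interaction models."""
    # Complex results, or iteration that ends in direct adjustment, favor hybrid.
    if ctx.complex_results or (ctx.needs_iteration and ctx.realtime_feedback):
        return "hybrid"
    # Real-time adjustment and frequent, well-understood tasks favor DMCs.
    if ctx.realtime_feedback or (ctx.frequent_use and not ctx.abstract_task):
        return "direct_manipulation"
    # Abstract goals, high cognitive load, or context-carrying refinement
    # favor conversation.
    if ctx.abstract_task or ctx.high_cognitive_load or ctx.needs_iteration:
        return "conversational"
    return "direct_manipulation"
```

Run against the article's examples, the broker updating daily filters routes to direct manipulation, while an overwhelmed new user asking "What can I do with this tool?" routes to conversation.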
Frameworks for Agentic AI Systems
Beyond interaction models, working with agentic AI systems—those that perform tasks semi-autonomously on users' behalf—requires sophisticated frameworks for establishing appropriate autonomy, visibility, and handoff mechanisms.
Autonomy Spectrum
I've developed an autonomy framework that defines appropriate levels of independent decision-making for AI agents:
Observer Agents: Only monitor and report (e.g., tracking market changes)
Assistant Agents: Suggest actions but require approval (e.g., proposed lease terms)
Delegate Agents: Take defined actions independently (e.g., scheduling property tours)
Autonomous Agents: Make complex decisions within boundaries (e.g., negotiating minor lease terms)
The right level of autonomy depends on task sensitivity, predictability, consequences, and user comfort.
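One way to make the spectrum concrete is to order the four levels and gate each action by a risk score. The action names and risk values below are hypothetical placeholders, standing in for whatever taxonomy a real product would define:

```python
from enum import IntEnum

class AgentAutonomy(IntEnum):
    OBSERVER = 1    # monitor and report only
    ASSISTANT = 2   # suggest actions, but require approval to execute
    DELEGATE = 3    # take defined actions independently
    AUTONOMOUS = 4  # make complex decisions within boundaries

# Illustrative risk scores per action type (assumptions for this sketch).
ACTION_RISK = {
    "report_market_change": 1,
    "propose_lease_terms": 2,
    "schedule_tour": 3,
    "negotiate_minor_terms": 4,
}

def requires_user_approval(level: AgentAutonomy, action: str) -> bool:
    """An agent may act alone only on actions at or below its autonomy level;
    anything riskier is surfaced to the user for approval."""
    risk = ACTION_RISK.get(action)
    if risk is None:
        return True  # unknown actions always escalate to the user
    return risk > level
```

Ordering the levels as integers keeps the rule simple, though a real system would also weigh task sensitivity, consequences, and user comfort, as noted above.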
Trust-Building Mechanisms
For users to adopt AI agents, trust mechanisms must be deliberately designed:
Expertise Signaling: Ways agents demonstrate domain knowledge
Consistency Patterns: Predictable behaviors that build confidence
Error Acknowledgment: How agents handle and communicate mistakes
Value Demonstration: Quick wins that show tangible benefits
Authority Frameworks
Clear authority frameworks establish boundaries for agent actions:
Permission Models: How users grant/restrict agent capabilities
Approval Workflows: Processes for requesting permission for sensitive actions
Override Mechanisms: How users can correct or redirect agent activities
Delegation Controls: Ways users specify which tasks agents can handle independently
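A minimal sketch of how these pieces could fit together, assuming a simple set-based permission model with an approval gate and a user override; the class and method names are illustrative, not taken from any particular framework:

```python
from dataclasses import dataclass, field

@dataclass
class AgentPermissions:
    """Permission model, approval workflow, and override in one small sketch."""
    granted: set = field(default_factory=set)         # tasks delegated to the agent
    needs_approval: set = field(default_factory=set)  # sensitive tasks gated per-run
    paused: bool = False                              # user override switch

    def grant(self, task: str, sensitive: bool = False) -> None:
        """User delegates a task; sensitive tasks still require per-run approval."""
        self.granted.add(task)
        if sensitive:
            self.needs_approval.add(task)

    def revoke(self, task: str) -> None:
        """User restricts the agent's capabilities."""
        self.granted.discard(task)
        self.needs_approval.discard(task)

    def can_run(self, task: str, approved: bool = False) -> bool:
        """Check the whole chain: override, delegation, then approval workflow."""
        if self.paused or task not in self.granted:
            return False
        if task in self.needs_approval:
            return approved
        return True
```

The `paused` flag is the bluntest possible override mechanism; a fuller design would support redirecting or correcting an in-flight action, not just halting the agent.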
Feedback Loops
Effective agents improve through continuous feedback:
Explicit Feedback: How users directly evaluate and correct agent actions
Implicit Feedback: How agents learn from user behavior and modifications
Learning Mechanisms: How individual and collective feedback improves agents over time
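The explicit and implicit channels can be sketched as a shared log, assuming an arbitrary 1-5 rating scale and an illustrative weighting that treats an untouched result as a weak positive and a user edit as a weak negative:

```python
from collections import defaultdict

class FeedbackLog:
    """Sketch of the two feedback channels above feeding one quality signal."""

    def __init__(self):
        self.scores = defaultdict(list)  # task name -> list of ratings

    def explicit(self, task: str, rating: int) -> None:
        """Direct user evaluation of an agent action (1-5 scale, assumed)."""
        self.scores[task].append(rating)

    def implicit(self, task: str, user_edited: bool) -> None:
        """Learn from behavior: an edit is a weak negative, no edit a weak
        positive. The 2/4 weighting is an assumption for this sketch."""
        self.scores[task].append(2 if user_edited else 4)

    def quality(self, task: str) -> float:
        """Aggregate signal a learning mechanism could consume over time."""
        ratings = self.scores[task]
        return sum(ratings) / len(ratings) if ratings else 0.0
```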
Implementation Principles
When implementing these frameworks, several principles guide successful outcomes:
Context-Sensitive Transitions: Design smooth transitions between interaction models based on changing user needs
Progressive Disclosure: Reveal AI capabilities gradually as users become comfortable
Transparent Intelligence: Make AI reasoning and capabilities visible without overwhelming users
Graceful Failure: Design for elegant degradation when AI reaches its limitations
Learning Loops: Incorporate mechanisms for continuous improvement based on usage patterns
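Graceful failure in particular lends itself to a small sketch. Assuming the system exposes a confidence score for AI answers, the interface can degrade to the familiar direct controls below a threshold instead of guessing; the threshold value and response shape here are my assumptions:

```python
from typing import Optional

def respond(ai_answer: Optional[str], confidence: float,
            threshold: float = 0.6) -> dict:
    """Graceful failure: serve the AI answer only when confidence clears the
    threshold; otherwise hand the user back to conventional controls."""
    if ai_answer is not None and confidence >= threshold:
        return {"mode": "ai", "content": ai_answer}
    return {
        "mode": "fallback",
        "content": ("I'm not confident enough to answer that. "
                    "Try the search filters instead."),
        "show_direct_controls": True,  # elegant degradation to DMCs
    }
```

Surfacing the fallback as a transition to direct manipulation, rather than a bare error, is one way the context-sensitive transition and graceful failure principles reinforce each other.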
Conclusion
The most successful AI experiences don't force users into a single interaction paradigm but thoughtfully apply the right model for each context. By analyzing dimensions like task abstraction, cognitive load, usage frequency, and result complexity, we can create experiences that leverage AI capabilities while respecting users' needs for control, efficiency, and understanding.
As AI continues to evolve, these frameworks provide a foundation for designing experiences that balance automation with human agency, technological capability with usability, and efficiency with trust. The challenge isn't choosing between conversation and direct manipulation—it's knowing when and how to employ each approach within a cohesive experience that amplifies human capabilities through thoughtful AI integration.


