
The Discovery Conversation: What You Should Ask AI Agent Providers

  • Writer: Tayana Solutions

 

The Evaluation Challenge 

AI agent providers range from platform vendors to implementation consultants to full-service operators. Discovery conversations determine the fit between your needs and a provider's capabilities. 

 

Poor vendor selection creates implementation struggles regardless of technology capability. The right questions reveal whether a provider understands your operational reality. 

 

Implementation Approach Questions 

1. "How do you typically structure the pilot implementation?" 

What to listen for: 

  • Defined phases with clear milestones 

  • Realistic timelines (6-10 weeks typical) 

  • Staff involvement requirements stated upfront 

  • Limited scope that proves concept without overwhelming 

Red flags: 

  • Vague timeline ("depends on your needs") 

  • No mention of staff time requirements 

  • Promises of immediate full production deployment 

  • Unwillingness to start with limited scope 

 

2. "What level of staff involvement do you require during implementation?" 

What to listen for: 

  • Specific hour estimates (typically 4-6 hours weekly for 6-8 weeks) 

  • Clear roles (who needs to participate) 

  • Timing flexibility acknowledged 

  • Understanding of capacity constraints 

Red flags: 

  • "Minimal involvement required" 

  • Expectation of constant availability 

  • Assumption that staff will immediately adopt new workflows 

  • No discussion of change management 

 

3. "How do you handle rule definition and conversation script development?" 

What to listen for: 

  • Collaborative workshop approach 

  • Recognition that you know your business rules 

  • Iterative refinement process 

  • Examples from similar implementations 

Red flags: 

  • "We have standard scripts that work for everyone" 

  • No discussion of your specific decision criteria 

  • Reluctance to customize approach 

  • Claims that minimal customization is needed 

 

4. "What happens if pilot results do not meet expectations?" 

What to listen for: 

  • Clear success metrics defined upfront 

  • Willingness to iterate and adjust 

  • Discussion of what constitutes pilot success 

  • Reasonable exit options if the approach is fundamentally not working 

Red flags: 

  • Defensive response 

  • No clear success criteria 

  • Contractual lock-in regardless of results 

  • Blame-shifting language about implementation failures 

 

 

Technical Requirements Questions 

5. "What ERP integrations have you completed successfully?" 

What to listen for: 

  • Specific experience with your ERP platform 

  • Understanding of API capabilities and limitations 

  • Reference customers using similar ERP 

  • Realistic assessment of integration complexity 

Red flags: 

  • "We integrate with everything" 

  • No specific experience with your platform 

  • Underestimation of integration complexity 

  • No reference customers with your ERP 
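
One way to pressure-test integration claims during the call is to ask the provider to walk through a basic read against your ERP's API. Below is a minimal Python sketch of that kind of connectivity check; the endpoint, token, and response fields are hypothetical placeholders, not any real ERP's API.

    # Minimal read check against a hypothetical ERP REST API.
    # Endpoint, token, and field names are illustrative only.
    import requests

    ERP_BASE_URL = "https://erp.example.com/api/v1"  # hypothetical
    API_TOKEN = "replace-with-your-token"            # hypothetical

    def fetch_open_invoices(customer_id: str) -> list:
        """Pull open invoices for one customer to confirm read access."""
        response = requests.get(
            f"{ERP_BASE_URL}/customers/{customer_id}/invoices",
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            params={"status": "open"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("invoices", [])

A provider with genuine experience on your platform should be comfortable discussing authentication, rate limits, and field mappings at this level of detail. 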

 

6. "What data quality is required from our ERP system?" 

What to listen for: 

  • Specific requirements (contact data, phone numbers, emails) 

  • Acknowledgment that perfect data is not necessary 

  • Willingness to work with real-world data quality 

  • Guidance on acceptable quality thresholds 

Red flags: 

  • "Your data must be perfect" 

  • No discussion of data quality assessment 

  • Assumption all customer records are complete 

  • Unwillingness to handle data gaps 
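
You can ground this question in your own numbers before the conversation. Here is a minimal Python sketch of a contact-data audit, assuming your ERP export is a list of records with hypothetical "phone" and "email" fields:

    # Audit customer records for the contact fields an AI agent needs.
    # Field names are assumptions; adjust to match your ERP export.

    def contact_completeness(records: list) -> float:
        """Share of records with both a phone number and an email."""
        if not records:
            return 0.0
        complete = sum(1 for r in records if r.get("phone") and r.get("email"))
        return complete / len(records)

    customers = [
        {"name": "Acme Co", "phone": "555-0100", "email": "ap@acme.example"},
        {"name": "Beta LLC", "phone": "", "email": "billing@beta.example"},
    ]
    print(f"Contact data completeness: {contact_completeness(customers):.0%}")

A good provider will tell you what completeness threshold they consider workable rather than insisting on perfect data. 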

 

7. "How does the agent handle situations outside defined rules?" 

What to listen for: 

  • Clear escalation process to human staff 

  • Documentation of what agent could not handle 

  • Learning from escalations to improve rules 

  • Realistic acknowledgment that 20-40% of cases escalate 

Red flags: 

  • Claims of handling everything autonomously 

  • No discussion of escalation workflows 

  • Assumption that rules will be comprehensive upfront 

  • Dismissal of edge cases 
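
Whatever architecture the provider describes, the escalation decision should reduce to something this simple at its core. A Python sketch under assumed names; the rule structure and confidence threshold are illustrative, not any vendor's actual logic:

    # Sketch of the per-case escalation decision. Rule matching and the
    # confidence threshold are simplified illustrations.

    CONFIDENCE_THRESHOLD = 0.8  # assumed; ask the provider for theirs

    escalation_log = []  # raw material for later rule refinement

    def handle_case(case: dict, rules: dict) -> str:
        rule = rules.get(case["exception_type"])
        if rule is None:
            reason = "no matching rule"
        elif case["confidence"] < CONFIDENCE_THRESHOLD:
            reason = "low confidence"
        else:
            return "handled"
        # Outside defined rules: route to a human and record why, so that
        # recurring escalations can become new rules over time.
        escalation_log.append({"case": case, "reason": reason})
        return "escalated"

Expect 20-40% of cases to take the escalation path early on; the documentation in escalation_log is what turns those cases into improved rules. 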

 

 

Platform and Cost Questions 

8. "What is your complete pricing model including all platform costs?" 

What to listen for: 

  • Transparent breakdown of all costs 

  • Consulting/implementation fees separate from platform fees 

  • Usage-based platform costs explained clearly 

  • No hidden costs for essential features 

Red flags: 

  • Vague "depends on volume" without ranges 

  • Essential capabilities priced as add-ons 

  • Unwillingness to provide ballpark estimate 

  • Significant price increases after pilot 
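
It helps to run the provider's numbers through a simple model during the call. A Python sketch with placeholder figures; every number here is an assumption to replace with the provider's actual quote:

    # Back-of-envelope first-year cost model. All figures are placeholders;
    # substitute the provider's quoted numbers.

    implementation_fee = 25_000.00  # one-time consulting fee (assumed)
    platform_fee_monthly = 500.00   # fixed platform subscription (assumed)
    cost_per_conversation = 0.75    # usage-based rate (assumed)

    def first_year_cost(conversations_per_month: int) -> float:
        usage = conversations_per_month * cost_per_conversation * 12
        return implementation_fee + platform_fee_monthly * 12 + usage

    for volume in (500, 1_000, 2_000):
        print(f"{volume:>5}/month -> ${first_year_cost(volume):,.2f}")

If a provider cannot supply these three variables on the spot, that maps directly to the "vague without ranges" red flag above. 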

 

9. "What platforms do you use and why?" 

What to listen for: 

  • Specific platform names (OpenAI, Anthropic for AI, specific voice platforms) 

  • Technical reasoning for platform choices 

  • Discussion of platform stability and reliability 

  • Transparency about platform dependencies 

Red flags: 

  • Proprietary platforms only they control 

  • Reluctance to name specific platforms 

  • Claims of "superior" proprietary AI 

  • Vendor lock-in to their specific platform 

 

10. "What happens if we want to change providers later?" 

What to listen for: 

  • Discussion of your ownership of conversation scripts and rules 

  • Data portability 

  • Reasonable transition assistance 

  • Recognition that lock-in is a legitimate concern 

Red flags: 

  • Deflection of question 

  • Claims that switching would be prohibitively difficult 

  • Proprietary formats that prevent portability 

  • Contractual barriers to transition 
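
Portability is easy to test concretely: ask whether your rules and scripts can be exported in an open format. A Python sketch of what that might look like; the rule structure shown is hypothetical:

    # Sketch of exporting conversation rules to an open, portable format.
    # The rule structure is hypothetical; the point is that plain JSON
    # lets you take your business logic to another provider.
    import json

    rules = {
        "late_payment": {
            "trigger": "invoice overdue more than 15 days",
            "script": "Polite reminder citing invoice number and amount.",
            "escalate_after_attempts": 2,
        },
    }

    with open("conversation_rules_export.json", "w") as f:
        json.dump(rules, f, indent=2)

A provider comfortable with this kind of export is signaling that lock-in is not part of their business model. 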

 

 

Success Metrics Questions 

11. "How do you measure implementation success?" 

What to listen for: 

  • Specific metrics (exception handling rates, time savings, DSO improvement) 

  • Baseline measurement approach 

  • Realistic success thresholds (60-80% handling rate) 

  • Willingness to define metrics collaboratively 

Red flags: 

  • Vague "customer satisfaction" metrics 

  • No discussion of baseline measurement 

  • Unrealistic success claims (95%+ automation) 

  • Resistance to defining clear metrics 
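
Every metric in this list is computable from data you already have, so it is fair to ask the provider exactly how they would calculate each one. A Python sketch of the core calculations; the baseline figures are placeholders to replace with your own measurements:

    # Core pilot success metrics. All inputs are placeholder assumptions;
    # measure your own baselines before the pilot starts.

    cases_total = 400                # pilot exception cases (assumed)
    cases_handled = 280              # resolved without escalation (assumed)
    baseline_minutes_per_case = 12   # staff time before the pilot (assumed)
    baseline_dso = 52.0              # days sales outstanding, pre-pilot (assumed)
    pilot_dso = 47.5                 # DSO at end of pilot (assumed)

    handling_rate = cases_handled / cases_total
    hours_saved = cases_handled * baseline_minutes_per_case / 60
    dso_improvement = baseline_dso - pilot_dso

    print(f"Handling rate: {handling_rate:.0%}")          # 70%
    print(f"Staff hours saved: {hours_saved:.0f}")        # 56 hours
    print(f"DSO improvement: {dso_improvement:.1f} days") # 4.5 days

The 70% handling rate in this sketch sits inside the realistic 60-80% band; a provider projecting 95%+ is over-promising. 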

 

12. "What results have you seen with companies similar to ours?" 

What to listen for: 

  • Specific examples with numbers 

  • Similar company size and exception volume 

  • Honest discussion of what works and what does not 

  • Willingness to provide reference contacts 

Red flags: 

  • Only perfect success stories 

  • No specifics about results 

  • Reluctance to provide references 

  • Claims that "every implementation is unique" to avoid specifics 

 

 

Support and Ongoing Operations Questions 

13. "What support do you provide after go-live?" 

What to listen for: 

  • Clear support model (hours, response times) 

  • Ongoing refinement included or separate 

  • Technical monitoring and issue resolution 

  • Long-term partnership orientation 

Red flags: 

  • Support as expensive add-on 

  • Minimal post-implementation involvement 

  • No ongoing refinement included 

  • "Set it and forget it" mentality 

 

14. "How do you handle agent performance issues?" 

What to listen for: 

  • Monitoring and alerting systems 

  • Clear process for addressing quality problems 

  • Responsibility model (who does what) 

  • Commitment to resolution timeframes 

Red flags: 

  • Assumption that problems will not occur 

  • No monitoring infrastructure mentioned 

  • Vague about who handles issues 

  • Slow response time commitments 
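
Whatever monitoring infrastructure the provider describes, the underlying health check is straightforward. A minimal Python sketch; both thresholds are assumptions to agree on with the provider:

    # Minimal daily health check for agent performance.
    # Thresholds are assumptions; set real values with your provider.

    MIN_HANDLING_RATE = 0.60    # floor of the realistic 60-80% band
    MAX_ESCALATION_SPIKE = 1.5  # escalations vs. trailing daily average

    def check_agent_health(handled: int, escalated: int,
                           avg_daily_escalations: float) -> list:
        alerts = []
        total = handled + escalated
        if total and handled / total < MIN_HANDLING_RATE:
            alerts.append("handling rate below agreed threshold")
        if escalated > avg_daily_escalations * MAX_ESCALATION_SPIKE:
            alerts.append("escalation volume spiking above trailing average")
        return alerts

    print(check_agent_health(handled=45, escalated=40,
                             avg_daily_escalations=20.0))

The follow-up questions are who receives these alerts and what response time the provider commits to. 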

 

15. "What happens when exception volume grows significantly?" 

What to listen for: 

  • Usage-based pricing scales reasonably 

  • Platform can handle volume increases 

  • No re-architecture required for growth 

  • Partnership orientation for scaling 

Red flags: 

  • Major price jumps at volume thresholds 

  • Platform limitations at higher volumes 

  • Required reimplementation for scale 

  • Pricing that penalizes your success with high costs 
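
Threshold-based price jumps are easy to expose by computing effective per-case cost across volumes. A Python sketch with a hypothetical tiered rate card:

    # Compare effective per-case cost across volumes to expose
    # threshold jumps. The rate card below is hypothetical.

    def monthly_cost(cases: int) -> float:
        if cases <= 1_000:
            return cases * 0.75
        # Hypothetical penalty tier: the rate jumps above 1,000 cases.
        return cases * 1.10

    for volume in (800, 1_000, 1_200, 2_000):
        cost = monthly_cost(volume)
        print(f"{volume:>5} cases -> ${cost:>8,.2f} (${cost / volume:.2f}/case)")

In this hypothetical rate card, per-case cost rises from $0.75 to $1.10 as volume grows past the threshold: exactly the pricing pattern that penalizes your success. 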

 

 

Red Flag Summary 

Walk away if provider: 

  • Cannot provide specific examples with results 

  • Has no experience with your ERP platform 

  • Promises unrealistic success rates (90-100%) 

  • Requires long contractual commitments before pilot 

  • Is vague about costs or has hidden fees 

  • Shows no understanding of change management 

  • Cannot explain technical approach clearly 

  • Has no references you can contact 

  • Dismisses your concerns or constraints 

  • Oversells capability without acknowledging limitations 

 

Proceed cautiously if provider: 

  • Has some relevant experience, but limited depth 

  • Relies on third-party platforms but cannot explain the technical details 

  • Provides cost estimates but not detailed breakdown 

  • Offers references, but only for specific use cases 

 

Good signs from provider: 

  • Specific examples with measurable results 

  • Direct experience with your ERP and industry 

  • Realistic expectations about success rates (60-80%) 

  • Flexible pilot approach with clear exit options 

  • Transparent about costs including platform fees 

  • Acknowledges implementation challenges 

  • Can explain technical approach clearly 

  • Provides multiple relevant references 

  • Asks detailed questions about your operations 

  • Discusses what does not work, not just what works 

 

 

The Reality 

Discovery conversations reveal whether a provider understands your operational reality or is simply selling technology. The right questions expose depth of experience, technical capability, and partnership orientation. 

 

Good providers ask as many questions as they answer. They seek to understand your specific situation before proposing solutions. They acknowledge limitations and implementation challenges rather than making unrealistic promises. 

 

 

About the Author 

This content is published by ERP AI Agent, a consulting practice specializing in AI agents for mid-market ERP exception processes. 

 

 

Published: January 2025 · Last Updated: January 2025 · Reading Time: 7 minutes 

 
