Guides

What Every IT Manager Should Know About AI Before Their Next Vendor Meeting

David

I sat in a vendor demo last month where the sales engineer said the phrase "AI-powered" fourteen times in twenty minutes. I counted. Their product was a help desk ticketing system with auto-categorization. That's not AI-powered. That's a keyword classifier that's existed since the 2000s. But "AI-powered" is the magic phrase that opens budgets right now, so every vendor in your stack has suddenly discovered they've been doing AI all along.

If you're an IT manager heading into vendor renewals, procurement meetings, or product evaluations, you need to be able to cut through the marketing. Here's how.

The Five Questions That Reveal Everything

1. "What specific problem does the AI component solve that the product didn't solve before?"

This is the killer question. Legitimate AI integration solves a specific, identifiable problem. "Our AI analyzes user behavior patterns to detect account compromise" — that's a real problem with a real AI solution. "Our AI powers a smarter, more intuitive user experience" — that's marketing copy for a slightly better UI.

If the vendor can't point to a specific capability that AI enables — one that didn't exist in the pre-AI version of the product — the AI is cosmetic.

2. "What data does the AI model train on, and is it my data?"

This question accomplishes two things. First, it tells you whether the vendor actually understands their own AI implementation (you'd be surprised how many sales engineers can't answer this). Second, it surfaces critical privacy and security concerns.

Good answers: "The model is pre-trained on [dataset], and fine-tuned on aggregated, anonymized data from our customer base. Your specific data is not used for training unless you opt in, and we can provide our data processing agreement." Bad answers: anything vague about "proprietary data" or "the cloud."

3. "What's the false positive rate, and how was it measured?"

Every AI system has false positives. The question is how many, and whether the vendor has measured them honestly. Ask for the false positive rate specifically on alert types or use cases that match your environment. Ask how it was tested: on internal test data, on customer production data, or on an independent benchmark.

If the vendor can't provide a false positive rate, they haven't tested their AI in production conditions. If they provide one but can't explain the testing methodology, treat the number with deep skepticism.
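The rate itself is simple arithmetic; what matters is the labeled data behind it, which is exactly why you should ask about methodology. A minimal sketch of the calculation, using hypothetical alert labels, gives you a concrete reference point for checking a vendor's math:

```python
# Illustrative only: how a false positive rate is computed from labeled
# alert outcomes. The alert data below is hypothetical.

def false_positive_rate(alerts):
    """FPR = false positives / (false positives + true negatives)."""
    fp = sum(1 for a in alerts if a["alerted"] and not a["malicious"])
    tn = sum(1 for a in alerts if not a["alerted"] and not a["malicious"])
    return fp / (fp + tn) if (fp + tn) else 0.0

# 1,000 benign events, 20 of which triggered an alert -> 2% FPR
alerts = [{"alerted": i < 20, "malicious": False} for i in range(1000)]
print(false_positive_rate(alerts))  # 0.02
```

Note that the denominator is benign events, not total alerts; a vendor quoting "99% accuracy" on a dataset that is 99% benign traffic has told you nothing.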

4. "Can you show me the AI's reasoning for a specific decision?"

Explainability is the difference between a tool you can trust and a black box you're gambling on. Ask the vendor to pull up a real example (or a demo example) and walk through how the AI reached its conclusion. Not the marketing version — the actual factors, weights, and logic.

For some AI applications (deep learning on network traffic, for example), full explainability isn't realistic. In those cases, ask for the next best thing: what factors contributed most to the decision, and can you see them? If the answer is "the AI just knows," that's not acceptable for a security tool that you're using to make risk decisions.
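"Show me the contributing factors" has a concrete shape you can ask to see in the demo: a ranked list of feature contributions for a single decision. The feature names and weights below are hypothetical, but this is roughly the output a vendor with real explainability tooling should be able to produce:

```python
# Sketch of a per-decision factor breakdown. Feature names and weights
# are hypothetical examples for an account-compromise alert.

contributions = {
    "login_from_new_country": 0.41,
    "impossible_travel_speed": 0.33,
    "new_device_fingerprint": 0.18,
    "off_hours_access": 0.08,
}

# Rank factors by how much each contributed to the decision
for feature, weight in sorted(contributions.items(),
                              key=lambda kv: kv[1], reverse=True):
    print(f"{feature:28s} {weight:.0%} of decision")
```

If the vendor can show you something like this for a live example, that's a good sign. If the demo can only show you the final verdict, you're buying the black box.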

5. "What happens when the AI is wrong?"

This question tests the maturity of the AI implementation. Good answers include: "Analysts can flag incorrect determinations, which feeds back into model tuning. We provide a monthly accuracy report. You can set confidence thresholds below which the AI defers to human judgment." Bad answers: deflection, or claims that the AI is "over 99% accurate" without context.

Every AI system is wrong sometimes. What matters is whether the vendor has built infrastructure to handle that gracefully — feedback loops, confidence thresholds, human escalation paths, and continuous improvement processes.
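The confidence-threshold pattern above is worth picturing concretely, because it's the mechanism you want the vendor to demonstrate. A minimal sketch, with hypothetical thresholds and routing labels:

```python
# Sketch of a confidence-threshold escalation path, the pattern a mature
# AI implementation should support. Thresholds here are hypothetical and
# would be tunable per deployment.

AUTO_ACTION_THRESHOLD = 0.90   # act automatically above this
DEFER_THRESHOLD = 0.60         # below this, route to a human analyst

def route_decision(label: str, confidence: float) -> str:
    if confidence >= AUTO_ACTION_THRESHOLD:
        return f"auto: apply '{label}'"
    if confidence >= DEFER_THRESHOLD:
        return f"suggest: propose '{label}' for analyst review"
    return "escalate: queue for human analyst, log for model tuning"

print(route_decision("account_compromise", 0.97))
print(route_decision("account_compromise", 0.72))
print(route_decision("account_compromise", 0.41))
```

A vendor whose product exposes nothing like these thresholds has made the confidence decision for you, and you can't tune it when their defaults don't match your risk tolerance.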

Red Flags in Vendor AI Claims

The Black Box

"We can't disclose how the AI works due to proprietary IP." No. You're not asking for source code. You're asking for a basic explanation of methodology — supervised learning vs. unsupervised, what features it uses, how it's validated. If a vendor won't provide that level of transparency for a security product, walk away. You can't manage risk from a tool when you don't understand how it reaches its conclusions.

The Moving Goalposts

"The AI improves over time." Sure, but what's the baseline? What's the current accuracy? What's the improvement trajectory? "Gets better" without measurements is a feature request dressed up as a feature.

The AI-Washed Legacy Product

This is the most common offender. A product that was built on rules and signatures five years ago now has "AI" in its marketing because they added a machine learning model for one minor feature. The core product is the same. Ask what percentage of the product's detection/analysis/automation relies on AI versus traditional methods. If it's 10% AI and 90% signatures, the "AI-powered" label is misleading.

The "AI-Powered" Chatbot

Adding a natural language interface to a product is not "AI-powered" in any meaningful sense for security. If the vendor's big AI feature is that you can type questions instead of clicking buttons, that's a UX improvement, not a security capability. It might be convenient. Don't pay a premium for it.

No Independent Validation

Has the AI's performance been tested by anyone other than the vendor? Independent lab results, customer case studies with specific numbers, or analyst firm evaluations all count. Self-reported accuracy on self-selected test cases does not.

What Actually Matters When Evaluating AI Features

Integration With Your Existing Stack

The best AI feature in the world is worthless if it doesn't integrate with your SIEM, your ticketing system, your identity provider, and your workflow tools. AI features that require you to check a separate dashboard are AI features that won't get used after the first month.

Total Cost of Ownership

AI features often come with hidden costs: increased compute for on-prem deployments, per-query API charges for cloud services, additional licensing tiers to access the "AI-enhanced" version. Ask for the all-in cost, including the AI features, and compare it to the pre-AI pricing. Is the price delta justified by the capability the AI actually adds?
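A back-of-envelope comparison is usually enough to make the delta visible before you're deep in contract negotiations. All figures below are hypothetical placeholders; substitute the numbers from your own quotes:

```python
# Rough annual TCO comparison: pre-AI licensing versus the "AI-enhanced"
# tier plus its usage charges. Every figure here is a made-up example.

def annual_tco(base_license, ai_tier_uplift=0.0, per_query_cost=0.0,
               queries_per_month=0, extra_compute=0.0):
    """Sum yearly license, AI uplift, compute, and metered query costs."""
    return (base_license + ai_tier_uplift + extra_compute
            + per_query_cost * queries_per_month * 12)

pre_ai = annual_tco(base_license=50_000)
with_ai = annual_tco(base_license=50_000, ai_tier_uplift=20_000,
                     per_query_cost=0.02, queries_per_month=100_000,
                     extra_compute=6_000)
print(f"AI delta: ${with_ai - pre_ai:,.0f}/yr")
```

Note how the metered per-query charge dominates at volume: in this made-up example it's nearly half the delta, and it's the line item most likely to be missing from the demo-stage quote.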

Vendor Lock-In Risk

Some AI features increase your dependency on the vendor. If the AI learns from your data over time, switching vendors means losing that learned context. Ask about data portability: can you export your tuning data, your custom rules, your false-positive feedback? Or does that institutional knowledge stay with the vendor?

Realistic Deployment Timeline

AI features often need a learning period before they're effective. "The AI needs 30 days of baseline data before detection is reliable" is fine — as long as you know that upfront and plan for it. Ask the vendor for the realistic time to value, not the best-case demo scenario.

Your Pre-Meeting Prep Checklist

  • Identify the specific problem you're trying to solve (not "we need AI" — that's not a problem statement)
  • Research the vendor's AI claims in advance — read their technical documentation, not just marketing pages
  • Prepare the five questions above, customized for your environment
  • Bring a technical team member who can validate claims in real time
  • Request a proof of concept in YOUR environment before committing to purchase
  • Ask for references from customers with similar environments and use cases

The AI wave is real. Genuinely useful AI capabilities exist in security products today. But for every legitimate AI feature, there are five marketing-driven claims that don't survive scrutiny. Your job is to tell the difference — and now you know how.