2024–2025

AI Engineering & LLMs

Tools, frameworks, and challenges in building AI and LLM-based applications.

AI/LLM Systems in Production

54% have no LLM systems in production; 28% have one, 16% have 2–5, and 2% have more than 5. Most organizations were still in the experimentation phase.

For which use cases do you currently employ AI/LLMs-based applications? (Select all that apply)

Code generation (62%)
Question-answering on internal knowledge (58%)
Document summarization / classification (55%)
Content generation (32%)
Customer support automation (26%)
Data annotation (17%)
Autonomous agents (14%)
Agentic interactions (13%)
Content moderation / quality control (10%)

Code generation (62%) and Q&A on internal knowledge (58%) lead. Document summarization (55%) and content generation (32%) follow, then customer support automation (26%) and data annotation (17%). Use cases were focused on productivity.

Which managed LLM services or cloud-based providers do you use? (Select all that apply)

OpenAI dominates at 73%; Anthropic (24%) is next, and 21% don't use managed services at all. Groq (12%) and AWS Bedrock (11%) have smaller shares. Managed services were clearly preferred.

Do you self-host open-source models? (Select all that apply)

74% don't self-host. Among those who do, vLLM (9%) and custom inference stacks (9%) lead. Self-hosting was niche, mainly for control or cost reasons.

Which AI application patterns do you use? (Select all that apply)

50% use prompt-based applications; 50% don't customize models at all. Fine-tuning was uncommon: 73% don't fine-tune, 16% fine-tune self-hosted models, and 12% fine-tune managed models. Customization was split.
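The prompt-based pattern dominating here means all task-specific behavior lives in the prompt, with no change to the model itself. A minimal sketch of the idea, with a stub standing in for a real model API (the template and function names below are illustrative, not from the survey):

```python
# Prompt-based pattern: behavior is shaped entirely by the prompt
# template; the underlying model is used as-is, with no fine-tuning.
PROMPT_TEMPLATE = (
    "You are a support assistant. Using only the context below, "
    "answer the question.\n\n"
    "Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)

def build_prompt(context: str, question: str) -> str:
    """Fill the template; all task-specific logic lives here."""
    return PROMPT_TEMPLATE.format(context=context, question=question)

def call_model(prompt: str) -> str:
    """Stub in place of a managed-LLM API call."""
    return f"[model response to {len(prompt)} chars of prompt]"

if __name__ == "__main__":
    prompt = build_prompt("Refunds take 5 business days.",
                          "How long do refunds take?")
    print(call_model(prompt))
```

Switching tasks means swapping the template, which is why this pattern needs far less infrastructure than fine-tuning.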

Which frameworks or libraries do you use to build or orchestrate AI applications? (Select all that apply)

58% don't use AI frameworks. LangChain (34%) leads; LlamaIndex (17%) follows. Many relied on custom or ad hoc solutions rather than standardized frameworks.

Do you use any of the following vector databases for LLM-powered applications? (Select all that apply)

59% don't use vector databases. Elasticsearch (21%) leads; Chroma (16%) and Pinecone (12%) follow. pgvector (8%) and Qdrant (7%) have smaller shares. Vector DBs were still emerging.
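The core job these databases do for LLM applications is nearest-neighbor retrieval over embeddings. A toy sketch of that mechanism in pure Python, using a deterministic hash-based "embedding" as a stand-in for a real embedding model (class and function names are illustrative):

```python
# Toy illustration of what a vector store does for retrieval-augmented
# generation: index embeddings, return the most similar documents.
import hashlib
import math

def embed(text: str, dim: int = 16) -> list[float]:
    """Deterministic bag-of-words toy embedding (NOT a real model)."""
    vec = [0.0] * dim
    for token in text.lower().split():
        h = int(hashlib.md5(token.encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self) -> None:
        self.items: list[tuple[str, list[float]]] = []

    def add(self, text: str) -> None:
        self.items.append((text, embed(text)))

    def query(self, text: str, k: int = 1) -> list[str]:
        q = embed(text)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[1]),
                        reverse=True)
        return [t for t, _ in ranked[:k]]
```

Systems like pgvector or Chroma provide the same add/query interface, but with real embeddings, approximate-nearest-neighbor indexes, and persistence.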

Do you use any tools to monitor AI/LLM systems in production? (Select all that apply)

74% don't monitor AI systems. W&B (12%) and LangSmith (10%) lead; Evidently AI (5%) follows. Observability was under-adopted.
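The minimum that monitoring tools capture is per-call latency and token usage. A sketch of that core idea with a stub model function (the `CallLogger` class is illustrative; tools like LangSmith or W&B add tracing, evaluation, and dashboards on top):

```python
# Minimal LLM observability sketch: wrap each model call to record
# latency and rough token counts for later inspection.
import time

class CallLogger:
    def __init__(self) -> None:
        self.records: list[dict] = []

    def logged_call(self, model_fn, prompt: str) -> str:
        start = time.perf_counter()
        response = model_fn(prompt)
        elapsed = time.perf_counter() - start
        self.records.append({
            "latency_s": round(elapsed, 4),
            "prompt_tokens": len(prompt.split()),    # rough whitespace count
            "response_tokens": len(response.split()),
        })
        return response

def fake_model(prompt: str) -> str:
    """Stub in place of a real LLM API call."""
    return "stub answer"

logger = CallLogger()
logger.logged_call(fake_model, "hello world how are you")
```

Even this much makes cost regressions and latency spikes visible, which is the gap the 74% figure points at.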

How do you access or provision GPUs for training/fine-tuning or running LLMs?

55% find GPU provisioning not applicable. Among those who use GPUs, cloud (AWS 39%, Azure 23%, GCP 16%) dominates; 12% use on-premise. Cloud GPUs were preferred.

Do you have a dedicated GenAI/LLM team in your organization?

76% don't have a dedicated GenAI team; AI work was integrated into existing teams. Only 24% had specialized teams.

If you do any fine-tuning of LLMs, which of the following applies?

0 responses