Function Calling and Tool Use — LLMs as Real-World Orchestrators

10. 04. 2024 · Updated: 27. 03. 2026 · 1 min read · CORE SYSTEMS · AI

An LLM generates text. With function calling, it generates actions. The model calls a calendar API with parameters. No parsing — structured output directly from the model.
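
To make "structured output directly from the model" concrete, here is a hedged sketch of what such a tool call typically looks like on the wire (the field layout is modeled on the common OpenAI-style shape; exact field names vary by provider, and the calendar event is illustrative):

```python
import json

# A tool call as many chat APIs return it. The "arguments" field
# arrives as a JSON-encoded string, so it is decoded a second time.
raw = '''
{
  "name": "create_calendar_event",
  "arguments": "{\\"title\\": \\"Sync with QA\\", \\"start\\": \\"2024-04-15T10:00:00\\", \\"duration_minutes\\": 30}"
}
'''

call = json.loads(raw)
args = json.loads(call["arguments"])  # arguments are a nested JSON string

print(call["name"])  # create_calendar_event
print(args["duration_minutes"])  # 30
```

No regex, no scraping the model's prose: the application reads a name and a dict of arguments.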

How It Works

You define tools as JSON schemas. The model returns a structured function call with arguments. The application executes it and returns the result to the model.
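
A minimal sketch of that loop in Python. The `get_weather` tool, the schema shape, and the stubbed model response are all illustrative assumptions, not a specific provider's API:

```python
import json

# 1. Define the tool as a JSON schema the model sees.
TOOLS = [{
    "name": "get_weather",
    "description": "Return current weather for a city.",
    "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"],
    },
}]

# 2. The model returns a structured call (stubbed here for the sketch).
def fake_model(prompt: str, tools: list) -> dict:
    return {"name": "get_weather", "arguments": json.dumps({"city": "Prague"})}

# 3. The application executes it and feeds the result back to the model.
def get_weather(city: str) -> dict:
    return {"city": city, "temp_c": 14}  # real code would hit a weather API

REGISTRY = {"get_weather": get_weather}

call = fake_model("What's the weather in Prague?", TOOLS)
result = REGISTRY[call["name"]](**json.loads(call["arguments"]))
print(result)  # sent back to the model as a tool-result message
```

The registry keeps dispatch explicit: only functions you register can ever be executed, no matter what name the model emits.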

Design Principles

  • Clear descriptions: The model chooses a tool based on its description, so write it for the model, not for humans
  • Atomic functions: One tool = one action
  • Input validation: Never trust model-supplied parameters without validation
  • Idempotence: Calling a tool twice with the same arguments must have the same effect as calling it once

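The validation and idempotence principles can be sketched together. The `refund_order` tool, the `ORD-` prefix check, and the `request_id` deduplication are hypothetical names for illustration, not part of any library:

```python
_processed: set[str] = set()  # idempotence: remember handled request ids

def refund_order(order_id: str, request_id: str) -> dict:
    # Input validation: never trust model-supplied parameters.
    if not order_id.startswith("ORD-"):
        raise ValueError(f"invalid order_id: {order_id!r}")
    # Idempotence: a repeated call with the same request_id is a no-op.
    if request_id in _processed:
        return {"status": "already_processed"}
    _processed.add(request_id)
    # ... perform the actual refund here ...
    return {"status": "refunded", "order_id": order_id}

print(refund_order("ORD-42", "req-1"))  # {'status': 'refunded', ...}
print(refund_order("ORD-42", "req-1"))  # {'status': 'already_processed'}
```

In production the deduplication set would live in a database, not in process memory, but the contract is the same: a retried tool call must be safe.
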
Security

Prompt injection can trigger unauthorized API calls. Mitigate with a confirmation step for destructive actions, an allow-list of permitted tools, and rate limiting.
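
A guard combining those three mitigations might look like this sketch (tool names, the rate limit, and the `confirmed` flag are illustrative assumptions):

```python
import time

ALLOWED_TOOLS = {"get_weather", "create_ticket"}  # allow-list
DESTRUCTIVE = {"delete_ticket"}  # require explicit human confirmation
RATE_LIMIT = 10  # max calls per 60 seconds
_calls: list[float] = []

def guard(tool_name: str, confirmed: bool = False) -> None:
    """Raise unless this tool call is permitted right now."""
    if tool_name not in ALLOWED_TOOLS | DESTRUCTIVE:
        raise PermissionError(f"tool not allow-listed: {tool_name}")
    if tool_name in DESTRUCTIVE and not confirmed:
        raise PermissionError(f"{tool_name} requires human confirmation")
    now = time.monotonic()
    _calls[:] = [t for t in _calls if now - t < 60]  # sliding window
    if len(_calls) >= RATE_LIMIT:
        raise RuntimeError("rate limit exceeded")
    _calls.append(now)
```

Call `guard(name)` before dispatching any tool the model requests; an injected prompt can then at worst request a tool that the guard refuses.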

Production

A tool-use helpdesk agent with 8 tools: after 3 months, 60% of tickets were resolved without human intervention.

Function Calling Is the Bridge Between AI and Action

Invest in tool design the same way you invest in API design — it’s equally important.

Tags: function calling, tool use, AI agents, API