Fabric Data Agent #multilingual
4 items tagged with "multilingual"
Articles
Mar 14, 2026
Building a Spider2-Inspired Benchmark to Measure the Real Robustness of a Fabric Data Agent in Italian
This article moves from working demos to measurable reliability by introducing a Spider2-inspired benchmark for evaluating a Fabric Data Agent in Italian. It explains why manual spot checks are not enough, and shows how to design a more rigorous evaluation framework that separates already-taught patterns from true generalization. The result is a practical benchmark design for assessing multilingual Fabric Data Agents beyond isolated successful examples.
Mar 4, 2026
Fabric Data Agents Are English-First (For Now): A Hands-On Guide to Configuring One on Zava DIY for Non-English Users
This article provides a hands-on, incremental guide to configuring a Microsoft Fabric Data Agent on the Zava DIY dataset for non-English users, while keeping the agent grounded in an English-first setup. It shows how to improve reliability step by step through data source descriptions, agent instructions, domain constraints, formatting rules, and validated example queries, then extends the configuration with a practical "translate in, translate out" approach. The result is a reproducible quick-win pattern for making the agent more analytics-ready across languages without introducing external translation layers or custom front ends.
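The "translate in, translate out" approach described above can be sketched as a thin wrapper around an English-configured agent. This is a minimal illustration, not the article's actual implementation: `translate` and `ask_data_agent` are hypothetical stand-ins for a translation service and a call to the Fabric Data Agent, here backed by canned phrases so the flow is runnable.

```python
# Minimal sketch of the "translate in, translate out" pattern.
# translate() and ask_data_agent() are hypothetical stand-ins:
# in a real setup they would call a translation service and an
# English-configured Fabric Data Agent, respectively.

TRANSLATIONS = {
    ("it", "en"): {"Quanti ordini nel 2025?": "How many orders in 2025?"},
    ("en", "it"): {"There were 120 orders in 2025.": "Ci sono stati 120 ordini nel 2025."},
}

def translate(text: str, source: str, target: str) -> str:
    """Stand-in translator: looks up a canned phrase, else passes text through."""
    return TRANSLATIONS[(source, target)].get(text, text)

def ask_data_agent(question_en: str) -> str:
    """Stand-in for querying an English-grounded Fabric Data Agent."""
    return "There were 120 orders in 2025."

def ask_multilingual(question: str, lang: str = "it") -> str:
    question_en = translate(question, lang, "en")   # translate in
    answer_en = ask_data_agent(question_en)         # agent stays English-first
    return translate(answer_en, "en", lang)         # translate out

print(ask_multilingual("Quanti ordini nel 2025?"))
```

The key design point is that the agent's grounding, instructions, and example queries remain entirely in English; only the user-facing boundary changes language.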
Jan 2, 2026
Using Microsoft Fabric Data Agent in Non-English Languages: A Practical Exploration
This article examines what Microsoft Fabric Data Agent's current non-English limitation means in practice, using Italian as a concrete business scenario. Rather than stopping at the official "English-first" guidance, it presents three pragmatic patterns for enabling multilingual experiences today: English instructions with translate-in/translate-out behavior, Copilot Studio as a multilingual front-end, and a translation gateway built around the Data Agent API. The goal is to help teams choose the right architecture for multilingual adoption without overestimating native language support.
Mar 17, 2026
We Built the Benchmark. Now Let's Evaluate the Fabric Data Agent for Real
This article shows how to move from a benchmark design to a real evaluation workflow for a Microsoft Fabric Data Agent. Starting from a 72-question benchmark built in a previous article for an Italian multilingual scenario, it explains how to complete the ground-truth dataset, run evaluate_data_agent on Fabric, inspect summary and row-level results, and use notebooks to operationalize the full process. A key insight is that part of the observed weakness may come not only from the Data Agent, but also from the evaluation layer itself. By inspecting the SDK source code and testing a stricter custom critic prompt, the article shows how evaluation reliability can improve significantly without changing the agent or the benchmark. Overall, the piece is a practical guide to benchmarking and evaluating Fabric Data Agents more rigorously, especially in multilingual business scenarios.
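The row-level evaluation idea above, including the "stricter critic" insight, can be illustrated with a simple loop. This sketch does not reproduce the `evaluate_data_agent` API from the Fabric SDK; `BenchmarkRow`, `strict_critic`, and `evaluate` are hypothetical names showing the general shape of ground-truth comparison with a stricter matching rule.

```python
# Illustrative sketch of row-level benchmark evaluation with a strict
# critic. This is NOT the fabric-data-agent-sdk's evaluate_data_agent;
# it only shows the shape of comparing agent answers to ground truth.

from dataclasses import dataclass

@dataclass
class BenchmarkRow:
    question: str   # e.g. an Italian business question
    expected: str   # ground-truth answer

def strict_critic(expected: str, actual: str) -> bool:
    """Stricter critic: exact match after whitespace/case normalization."""
    norm = lambda s: " ".join(s.lower().split())
    return norm(expected) == norm(actual)

def evaluate(rows, ask):
    """Run every benchmark question through `ask` and score each row."""
    results = [(r.question, strict_critic(r.expected, ask(r.question))) for r in rows]
    passed = sum(ok for _, ok in results)
    return {"pass_rate": passed / len(results), "rows": results}
```

Separating the critic from the agent call mirrors the article's point: a weak pass rate can come from a lenient or unreliable critic rather than from the agent itself, so the critic deserves its own scrutiny.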