One of the main reasons companies switch to Power BI is to automate their reports, and automating data extraction is an important step towards that goal. If you don’t find a connector for your data source in the “Get Data” menu, your best option for extracting data automatically is to build a Power BI API connector. APIs allow Power BI to go beyond basic reports: they provide real-time data updates, automate data refresh, and connect to custom or third-party data sources.
This article explains how API connections work in Power BI, comparing different methods like custom connectors, push datasets, database pipelines, and third-party integration tools. It also gives a clear plan on how to use APIs for flexible, scalable, and smart business reporting.
You can think of an API as a waiter in a restaurant: Power BI (the customer) sends a request to the API (the waiter), which fetches the data from your data source (the kitchen) and returns it to Power BI, usually as a JSON response.

Every API is different and has its own documentation. When you develop a custom Power BI API connector, the job is mostly to follow that documentation and build your requests in the format the API expects.

Most API requests are sent over the HTTP protocol. You can therefore use the Web connector inside Power Query to build the correct HTTP request, including any required headers and body.
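As an illustration, here is a minimal sketch of what such a request looks like, written in Python; the endpoint, header names, and token are placeholders you would take from your API’s documentation. The same structure is reproduced in Power Query using the Web.Contents function.

```python
import requests

# Placeholder endpoint and credentials - replace with values from your API's documentation
BASE_URL = "https://api.example.com/v1/orders"
API_TOKEN = "your-api-token"

response = requests.get(
    BASE_URL,
    headers={
        "Authorization": f"Bearer {API_TOKEN}",  # most APIs authenticate via a header
        "Accept": "application/json",
    },
    params={"page": 1, "per_page": 100},  # query parameters, as defined by the API
    timeout=30,
)
response.raise_for_status()  # fail loudly on 4xx/5xx responses
data = response.json()       # Power BI receives the same payload as JSON
```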
Power BI supports several authentication strategies to ensure secure API access:
– Anonymous – for public APIs that require no credentials
– Basic – a username and password sent with each request
– Web API / API key – a static key passed in a header or query string
– Organizational account / OAuth 2.0 – token-based sign-in, typically through Azure AD (see the sketch below)
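For token-based APIs, the client first exchanges its credentials for a short-lived access token and then attaches it to every data request. Here is a minimal sketch, assuming a standard OAuth 2.0 client-credentials flow; the token URL, client ID, and secret are placeholders.

```python
import requests

# Placeholder values - take these from your API provider's OAuth documentation
TOKEN_URL = "https://auth.example.com/oauth/token"
CLIENT_ID = "your-client-id"
CLIENT_SECRET = "your-client-secret"

def get_access_token() -> str:
    """Exchange client credentials for a short-lived bearer token."""
    response = requests.post(
        TOKEN_URL,
        data={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["access_token"]

# The token is then sent in the Authorization header of every data request
headers = {"Authorization": f"Bearer {get_access_token()}"}
```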
Power BI typically ingests API data in the following formats:
– JSON – the most common response format for REST APIs
– XML – used by older, SOAP-style services
– CSV – flat exports and file-based feeds
– OData – a REST convention with its own dedicated Power BI connector
API rate limits control how often a system can request data within a given timeframe. They are enforced to control traffic and ensure service stability. Power BI will not throttle your calls automatically if you are overusing an API; the API service itself will reject them. You’ll need to manage this yourself using:
– Pagination, so each request returns a manageable chunk of data
– Delays and retry logic with backoff when the API returns a 429 response (see the sketch below)
– Incremental refresh, so only new or changed data is requested
– A staging database, so the API is called once per refresh cycle rather than once per report interaction
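Here is a minimal retry sketch in Python (the endpoint and header names are placeholders); a similar pattern can be approximated in Power Query by combining Web.Contents with its ManualStatusHandling option.

```python
import time
import requests

def get_with_retry(url: str, headers: dict, max_retries: int = 5) -> dict:
    """Call the API, backing off and retrying when the rate limit is hit."""
    delay = 1.0
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers, timeout=30)
        if response.status_code == 429:  # rate limit exceeded
            # Honour the Retry-After header if the API provides one
            wait = float(response.headers.get("Retry-After", delay))
            time.sleep(wait)
            delay *= 2                   # exponential backoff
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError(f"Rate limit still exceeded after {max_retries} retries")
```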
Here are some common APIs and their rate limits.
| API Source | Rate Limit |
| --- | --- |
| Salesforce | 15,000 API calls per 24 hours per org (can vary by edition) |
| Google Analytics | 50,000 requests per project per day; 10 queries per second per user |
| GitHub API | 60 requests/hour (unauthenticated) or 5,000/hour (authenticated) |
| Twitter API | 300 requests/15 minutes for certain endpoints (API v2, depending on tier) |
Power BI custom connectors contain an M code file that sends the API requests for each table you want to extract. Microsoft provides the Power Query SDK, which contains the specific functions used for writing these connectors, including M functions for creating a navigation menu, sending “get data” requests to the API, and so on.

Once your M code is ready, it is packaged into a .mez file, which also contains an image to display as the connector logo and a few other resources.
This approach also has several limitations that developers should be aware of.
If you have a large volume of data to extract, there is a high likelihood of hitting API limits with this approach. This applies not only to the connectors you write yourself, but also to native and third-party connectors that extract data from the API into Power BI directly.

This mainly happens because Power BI duplicates requests: every time a transformation step is applied in Power Query or a data preview is generated, Power BI sends another request to the API.

In our experience, data refresh speed drops dramatically when a large volume of API requests is sent through direct Power BI connectors. We have sometimes waited several hours for a refresh to complete, only to receive a “timeout” error.
Because of this, we only recommend using direct connectors to Power BI when your data volume is small and the API is well-structured.
You can overcome the limitations of direct connectors by extracting your data from the API into an Azure SQL Server database. Azure is Microsoft’s cloud platform for hosting, among other things, SQL databases, and it offers stable integration with Power BI since all the tools stay within the Microsoft ecosystem.
When we work with Azure, we prefer writing Python code to extract the data from APIs. This code is placed inside an Azure Function that runs on a schedule; when it executes, it inserts the data into an Azure SQL Server database.
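Here is a minimal sketch of such a function, assuming the classic (v1) Python programming model for Azure Functions, where the timer schedule is configured in the function’s function.json binding. The API URL, target table, and connection string are placeholders; real secrets belong in application settings or a key vault, not in code.

```python
import azure.functions as func
import pyodbc
import requests

# Placeholder values - store real secrets in application settings, not in code
API_URL = "https://api.example.com/v1/orders"
SQL_CONNECTION_STRING = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=your-server.database.windows.net;Database=your-db;"
    "Uid=your-user;Pwd=your-password;Encrypt=yes;"
)

def main(mytimer: func.TimerRequest) -> None:
    """Runs on a schedule, pulls data from the API and inserts it into Azure SQL."""
    rows = requests.get(API_URL, timeout=30).json()

    with pyodbc.connect(SQL_CONNECTION_STRING) as conn:
        cursor = conn.cursor()
        for row in rows:
            cursor.execute(
                "INSERT INTO dbo.Orders (OrderId, Amount) VALUES (?, ?)",
                row["id"], row["amount"],
            )
        conn.commit()
```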
Since API connectors on Azure are not written in M, they do not share these limitations and can usually process a far larger number of requests.
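For example, outside of Power Query the pipeline is free to page through the entire API in a single run. A sketch, assuming simple page-based pagination (the endpoint and parameter names are placeholders):

```python
import requests

API_URL = "https://api.example.com/v1/orders"  # placeholder endpoint

def fetch_all_pages(headers: dict, per_page: int = 100) -> list[dict]:
    """Page through the API until an empty page is returned."""
    rows, page = [], 1
    while True:
        batch = requests.get(
            API_URL,
            headers=headers,
            params={"page": page, "per_page": per_page},
            timeout=30,
        ).json()
        if not batch:
            break
        rows.extend(batch)
        page += 1
    return rows
```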
One limitation to be mindful of is the 10-minute execution limit on Azure Functions running on the Consumption plan. If your function runs for longer than 10 minutes, your refresh fails. You can work around this by placing your Python code in a virtual machine or by upgrading to the Azure Functions Premium plan.
If you do not want to write your own Power BI API connector, you can often buy one off the shelf. Designs vary: some send API requests directly from Power BI, while others use ODBC connection strings.
At Vidi Corp, we offer our own Power BI connectors that extract the data from a selection of sources into an Azure SQL Server database. This approach ensures that the heavy work of sending API requests is completed at the database level, leaving Power BI with the lighter job of pulling the data from Azure SQL Server and visualising it.
Another benefit of using third-party connectors is that you do not need to code them from scratch and maintain them later. Instead, you can focus on what you do best, which is analysing the data and gaining insights.
Finally, many third-party connectors, including ours, provide free Power BI templates which help to visualise the data and understand the data model. These templates can then be customised to include additional analysis that you need.
Let’s explore some of the custom connectors that we have developed for our customers.
There are two primary strategies for connecting API data to Power BI:
This approach allows Power BI Desktop and Service to communicate directly with the API. The implementation steps are the following:
1. Install the Power Query SDK (available as a Visual Studio Code extension).
2. Write the M code that handles authentication and data retrieval.
3. Build the project into a .mez file.
4. Copy the .mez file into the Documents\Power BI Desktop\Custom Connectors folder.
5. In Power BI Desktop, allow the loading of uncertified connectors (or sign the connector), then connect via Get Data.
6. For scheduled refresh in the Power BI Service, configure an on-premises data gateway that can load the connector.
Instead of loading API data directly into Power BI, push it into a staging database (SQL Server, Snowflake, BigQuery) and connect Power BI to that database.
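One advantage of this approach is custom refresh logic. As a sketch of the idea, assuming the API supports an `updated_since` filter (the endpoint, table, and column names are placeholders), the pipeline can load only records that changed since the last run:

```python
from datetime import datetime, timezone

import pyodbc
import requests

# Placeholder values for illustration
API_URL = "https://api.example.com/v1/orders"
SQL_CONNECTION_STRING = "Driver={ODBC Driver 18 for SQL Server};Server=...;Database=...;"

def incremental_load() -> None:
    """Fetch only records changed since the last run and upsert them."""
    with pyodbc.connect(SQL_CONNECTION_STRING) as conn:
        cursor = conn.cursor()

        # Read the high-water mark left by the previous run
        cursor.execute("SELECT MAX(UpdatedAt) FROM dbo.Orders")
        watermark = cursor.fetchone()[0] or datetime(2000, 1, 1, tzinfo=timezone.utc)

        # Ask the API only for records updated since then
        rows = requests.get(
            API_URL,
            params={"updated_since": watermark.isoformat()},
            timeout=30,
        ).json()

        # Upsert: update existing rows, insert new ones
        for row in rows:
            cursor.execute(
                """
                MERGE dbo.Orders AS t
                USING (SELECT ? AS OrderId) AS s ON t.OrderId = s.OrderId
                WHEN MATCHED THEN UPDATE SET Amount = ?, UpdatedAt = ?
                WHEN NOT MATCHED THEN INSERT (OrderId, Amount, UpdatedAt) VALUES (?, ?, ?);
                """,
                row["id"], row["amount"], row["updated_at"],
                row["id"], row["amount"], row["updated_at"],
            )
        conn.commit()
```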
Let’s review the complete comparison of both approaches in the following table:
| | Creating a Custom Connector | Pushing Data to the Database |
| --- | --- | --- |
| Approach | A packaged connector (.mez file) that connects directly to an API via Power BI | An external pipeline (ETL/ELT) pushes API data into a database queried by Power BI |
| Data Flow | Power BI ↔ API | API → ETL Process → Database → Power BI |
| Ideal For | Lightweight APIs, small-to-medium datasets, interactive use | Large/complex datasets, historical data, frequent updates |
| Authentication Handling | Handled within the connector (OAuth, tokens) | Handled in the pipeline (code/scripts or an orchestration tool) |
| Data Refresh | Scheduled refresh via gateway or service | No data gateway is needed |
| Historical Data Handling | The whole dataset is refreshed every time | Custom refresh logic is possible, such as refreshing only the updated data |
| Cost | Lower – uses Power BI infrastructure | Higher – additional costs of Azure Functions and Azure SQL Server start from around $20 per month |
| Example Tools | Power Query SDK, Visual Studio | Python, Azure Data Factory, Azure Functions, SQL Server |
| Security & Governance | Controlled in Power BI (with limitations) | Stronger – access can be restricted to specific IPs and data protected with encryption keys |
| Performance on Large Volumes | Limited – slower on large API responses | High – data is pre-aggregated and optimised in the database layer |
| Real-Time Capability | Limited to the Power BI refresh schedule | Can support near-real-time with streaming pipelines |
VIDI has developed various data engineering solutions for clients across industries, helping them streamline data processing, automate workflows, and enhance business intelligence.
For example, VIDI worked with Delta Airlines to optimise their data pipelines, ensuring real-time analytics for flight operations and customer insights. By automating data ingestion and transformation, they improved efficiency and reduced manual intervention.
Similarly, American Express leveraged VIDI’s expertise to integrate multiple financial data sources into a unified reporting system. This allowed for better fraud detection, transaction monitoring, and customer analytics.
For Kardex, a logistics and warehouse automation company, VIDI developed scalable data solutions to track inventory movement, optimise supply chain operations, and enhance predictive analytics.
These real-world applications demonstrate how VIDI’s data engineering services help businesses automate processes, improve decision-making, and gain a competitive edge.
Let’s go over some of the best practices:
– Respect API rate limits: paginate requests and add retry logic with backoff
– Prefer incremental loads over refreshing the whole dataset on every run
– Keep credentials out of code: use secure storage such as application settings or a key vault
– Flatten and shape the data as close to the source as possible
– Monitor refreshes and log API errors so failures are caught early
Let’s check some common issues and their solutions when working with custom APIs.
| Issue | Fix |
| --- | --- |
| API timeout or 429 errors | Use retries with delay, reduce request size |
| OAuth token expiry | Implement automatic refresh logic in the connector or pipeline |
| Complex JSON structure | Flatten JSON using Power Query or Python |
| Gateway refresh failures | Ensure correct credentials, avoid dynamic schema in connector |
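For the JSON-flattening fix in the table above, here is a minimal Python sketch using pandas.json_normalize; the nested payload is hypothetical. In Power Query, the equivalent is expanding record and list columns.

```python
import pandas as pd

# Hypothetical nested API response
payload = [
    {"id": 1, "customer": {"name": "Acme", "country": "UK"}, "amount": 250},
    {"id": 2, "customer": {"name": "Globex", "country": "US"}, "amount": 410},
]

# json_normalize flattens nested records into dotted column names,
# e.g. id, customer.name, customer.country, amount
df = pd.json_normalize(payload)
print(df)
```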
In this article, we have covered important topics like API authentication, types of connectors, integration methods, and scalable data engineering. By using these techniques, businesses can improve operations, make faster decisions, and gain a competitive edge.
VIDI offers expert solutions for Power BI, including custom connectors, API integration, and large-scale data automation. With deep industry experience, our team helps businesses set up and optimise Power BI for better performance.
If you need help improving your Power BI setup or integrating data smoothly, contact us for personal support and consultation.
Yes. Power BI can connect to REST APIs using Power Query (M) scripts, native connectors (like OData), or through custom connectors. For secure and scalable API access, custom connectors or data pipelines are recommended.
Power BI supports:
– Native connectors: SQL, Excel, SharePoint, Salesforce, Google Analytics
– Online services: Dataverse, GitHub, Dynamics 365
– File formats: JSON, XML, CSV, PDF
– Custom connectors: MEZ files built using Power Query SDK
DEaaS is a managed service where experts build and maintain your data pipelines. It includes:
– Data ingestion from APIs and systems
– Transformation and modeling
– Loading into data warehouses
– Ongoing automation, monitoring, and scalability