Data connections are the foundation everything else sits on. Get them right first — once your data sources are connected and returning results, the reporting work moves quickly. This chapter covers the four main connection paths in DashboardFox and the specific details that catch people off guard on each one.

Native Database Connections

DashboardFox connects natively to SQL Server, PostgreSQL, and MySQL without additional drivers. For most teams coming from Power BI or Tableau and connecting to one of these databases, this is the path to use. You'll need the server address or hostname, the database name, and credentials for a database user with read access to the tables you want to report on.

The most common first-day blocker is firewall access. DashboardFox connects from a static egress IP, and every plan includes a static IP address for exactly this purpose. If your database restricts inbound connections by IP (standard practice for production databases), add the DashboardFox egress IP to your allowlist before your first connection attempt; otherwise you'll spend time debugging what looks like a credentials problem but is actually a blocked port.
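
Before blaming credentials, it helps to confirm the network path is open at all. Here is a minimal sketch of a TCP reachability check; the hostname and port are placeholders for your own database's values. One caveat: running this from your laptop tests your laptop's IP, not DashboardFox's egress IP, so it can rule out a down server or wrong port but cannot prove the allowlist entry itself is working.

```python
import socket

def check_tcp_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Hypothetical host and port -- substitute your database's values
# (1433 for SQL Server, 5432 for PostgreSQL, 3306 for MySQL).
if check_tcp_reachable("db.example.com", 5432):
    print("port reachable: remaining failures are credentials or permissions")
else:
    print("port unreachable: check the firewall allowlist first")
```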

A few connection settings are worth knowing about. Connection timeouts are configurable, which matters if your queries run long or your database sits on a slow network path. For SQL Server and MySQL, stored procedures can be enabled per datasource if you want to expose them to report builders; this is off by default. You can also control whether users may write direct SQL queries against a datasource or are restricted to the structured App Builder interface. Direct SQL access is powerful for analysts, but think carefully before enabling it on any data that needs access controls.

ODBC Connections

For databases not covered by the native connectors — Oracle, IBM DB2, Snowflake, and many others — DashboardFox uses ODBC. This path requires two steps that the native connectors don't: installing the appropriate ODBC driver on the DashboardFox server, and setting up a DSN (Data Source Name) that defines the connection parameters. Both steps happen in the Integrations section of the admin panel.

The ODBC path is more setup work than a native connection, but it covers the wide range of databases the native connectors don't. If you're migrating from a tool that used a proprietary connector to an ODBC-compatible database, the credentials and connection parameters you used there carry over directly to the ODBC DSN configuration.
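
For a sense of what "DSN configuration" means in practice, here is an illustrative DSN in unixODBC's odbc.ini format for a hypothetical Snowflake warehouse. The DSN name, driver path, and parameter names are assumptions for this sketch (parameter names vary by driver), and DashboardFox collects the same information through its Integrations UI rather than a file; the point is that these are the same values your previous tool's connector already needed.

```ini
; Illustrative DSN definition (unixODBC odbc.ini style).
; Parameter names vary by ODBC driver -- check your driver's documentation.
[ReportingWarehouse]
Driver    = /opt/snowflake/lib/libSnowflake.so
Server    = account.region.snowflakecomputing.com
Database  = ANALYTICS
Warehouse = REPORTING_WH
UID       = dashboard_reader
```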

Excel and CSV Imports

If part of your current reporting is driven by Excel data — either as the primary data source or as a lookup table — DashboardFox handles Excel and CSV imports as a managed data source. A few specifics that differ from how other tools handle uploads:

Structure your Excel file before you import it. The first row becomes the column headers, and those headers become field names in DashboardFox. Get them right on import: once a datasource is created from a spreadsheet, you can add new columns and new worksheets in future updates, but you can't rename existing columns without recreating the datasource. Clean column names (no special characters, sensible display names) save time later.
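
As a quick illustration of the kind of cleanup worth doing before import, here is a small sketch of a header normalizer. The specific rules (strip punctuation, collapse whitespace, title case) are my own assumptions about what "clean" means, not DashboardFox requirements; adapt them to your naming conventions.

```python
import re

def clean_header(name: str) -> str:
    """Normalize a spreadsheet header into a clean field name:
    drop special characters, collapse whitespace, use title case."""
    name = re.sub(r"[^\w\s]", "", name)       # strip punctuation like $, %, ()
    name = re.sub(r"\s+", " ", name).strip()  # collapse runs of whitespace
    return name.title()

headers = [" order_id ", "Cust. Name", "Revenue ($)", "ship  date"]
print([clean_header(h) for h in headers])
# -> ['Order_Id', 'Cust Name', 'Revenue', 'Ship Date']
```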

Data types need to be correct in the source file. If a column contains dates but Excel is storing them as text, DashboardFox will import them as text and date operations won't work correctly. Fix data types in the source before importing rather than trying to cast them afterward. The same applies to numeric columns that Excel might be treating as text.
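
One way to repair text-stored dates before import is to parse them and re-emit them in an unambiguous ISO format. A sketch, assuming a couple of common input formats (the format list is an assumption; adjust it to whatever your spreadsheet actually contains):

```python
from datetime import datetime

def normalize_date(value: str,
                   formats=("%m/%d/%Y", "%d-%b-%Y", "%Y-%m-%d")) -> str:
    """Parse a date stored as text in one of the expected formats and
    re-emit it as ISO 8601, which spreadsheet tools recognize as a date."""
    for fmt in formats:
        try:
            return datetime.strptime(value.strip(), fmt).date().isoformat()
        except ValueError:
            continue
    raise ValueError(f"unrecognized date format: {value!r}")

print(normalize_date("03/14/2024"))   # -> 2024-03-14
```

The same approach works for numeric columns stored as text: parse, validate, and rewrite them before the file ever reaches the import step.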

Each worksheet in the Excel file becomes its own report type (category) in DashboardFox's App Builder. If you want to combine data from multiple worksheets, you'll need to set up a relationship join between the resulting report types. When you upload updates, you can either append new rows or replace the full dataset — the choice determines how you structure your ongoing data management process.


API Endpoint Connections

DashboardFox can fetch data from REST API endpoints — useful for pulling from SaaS applications that expose an API, from Google Sheets, or from any service that returns JSON. The setup process is intentionally close to how you'd test an API in Postman: you specify the endpoint URL, the HTTP method, request headers, query parameters or request body, and the authentication method (OAuth2, API key, or basic HTTP auth).

The practical recommendation is to test your API call in Postman (or a similar tool) and confirm it returns the expected JSON before attempting to set it up in DashboardFox. Once you have a working request in Postman, translating it to DashboardFox's interface is straightforward. Trying to debug API authentication inside an unfamiliar BI tool interface adds unnecessary complexity.
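
Once you have a response body in hand (copied from Postman's response pane, or fetched with curl), a few lines of script can confirm its top-level shape before you configure anything in DashboardFox. The sample body below is made up for illustration:

```python
import json

def describe_json(body: str) -> str:
    """Summarize the top-level shape of an API response body, so you can
    confirm it matches what you expect before wiring it into a BI tool."""
    data = json.loads(body)
    if isinstance(data, list):
        keys = sorted(data[0]) if data and isinstance(data[0], dict) else []
        return f"array of {len(data)} records, fields: {keys}"
    if isinstance(data, dict):
        return f"object with keys: {sorted(data)}"
    return f"scalar {type(data).__name__}"

# A response body you might have copied out of Postman (illustrative data):
body = '[{"id": 1, "total": "19.99"}, {"id": 2, "total": "5.00"}]'
print(describe_json(body))   # -> array of 2 records, fields: ['id', 'total']
```

A flat array of records like this is the easiest shape to import; a deeply nested object means more table relationships to manage on the DashboardFox side.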

DashboardFox takes the JSON response and flattens it into relational tables. Nested JSON objects become joined tables that you can relate in the App Builder. One important note on data types: JSON doesn't carry type information, so all columns from an API-sourced table initially come in as text. You'll need to apply casts in the App Builder's formula fields — converting date strings to actual date types, text numbers to numeric types — before you can use them in calculations or date functions.
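
To see why nesting matters, consider a toy flattener. This is a simplified model, not DashboardFox's actual behavior (DashboardFox splits nested objects into joined tables rather than dotted columns), but it shows the field inventory a nested record produces, and that a value like the total below arrives as a text string rather than a number:

```python
def flatten(record: dict, prefix: str = "") -> dict:
    """Flatten nested objects into dotted column names -- a simplified
    sketch of how nested JSON maps onto relational fields."""
    flat = {}
    for key, value in record.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, name + "."))  # recurse into the child
        else:
            flat[name] = value
    return flat

order = {"id": 7, "customer": {"name": "Ada", "region": "EU"}, "total": "42.50"}
print(flatten(order))
# -> {'id': 7, 'customer.name': 'Ada', 'customer.region': 'EU', 'total': '42.50'}
```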

API fetches run on a schedule you set in DashboardFox. Each scheduled fetch replaces the full contents of the table. For append-style data accumulation from an API, you'd need to handle that at the API or data layer rather than relying on DashboardFox's fetch behavior.
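
Since each fetch replaces the table, append-style history has to be built upstream of DashboardFox. One sketch, assuming a small JSON file as the accumulation layer (the file name and key column are hypothetical): merge each fresh API pull into the history, de-duplicating on a key, and serve the accumulated file through your own endpoint for DashboardFox to fetch.

```python
import json
from pathlib import Path

STORE = Path("orders_history.json")  # hypothetical accumulation file

def accumulate(new_records, key="id"):
    """Merge a fresh API pull into the accumulated history, de-duplicating
    on a key column, so the dataset keeps growing across scheduled fetches."""
    history = json.loads(STORE.read_text()) if STORE.exists() else []
    merged = {rec[key]: rec for rec in history}            # existing rows
    merged.update({rec[key]: rec for rec in new_records})  # new rows win
    out = list(merged.values())
    STORE.write_text(json.dumps(out))
    return out
```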

After Your Connection Is Set Up

Once a data source is connected and returning results, your next step depends on your user base. If you have analysts who will be writing their own SQL, you can give them direct query access and they can start immediately — the next chapter explains how. For teams where non-technical users need to be able to build their own reports, the connection is just the foundation: you'll build an App (the semantic layer) that exposes the right fields in a structured, SQL-free interface. That's the more powerful path for broad adoption, and it's what Chapter 5 covers.