# Quickstart
## What is Flapi?

Flapi is a lightweight framework that helps you create REST APIs from SQL queries without writing any application code. It's particularly useful when you need to serve data from files or databases to applications through a standardized API interface.
## Quick Start Example
Let's solve a common problem: You have customer data in a Parquet file and want to serve it through a REST API with proper validation and filtering capabilities.
### 1. Get Flapi
You can either download the binary:
```bash
curl -L https://github.com/datazoode/flapi/releases/latest/download/flapi -o flapi
chmod +x flapi
```
Or use Docker:
```bash
docker pull ghcr.io/datazoode/flapi:latest
```
### 2. Create Minimal Configuration
Create a `flapi.yaml` file:
```yaml
project_name: customer-api
project_description: API for customer data
template:
  path: './sqls'
connections:
  customers-parquet:
    properties:
      path: './data/customers.parquet'
duckdb:
  access_mode: READ_WRITE
```
### 3. Create Your First Endpoint
- Create the SQL templates directory:

  ```bash
  mkdir sqls
  ```
- Create an endpoint configuration (`sqls/customers.yaml`):

  ```yaml
  url-path: /customers/
  request:
    - field-name: id
      field-in: query
      description: Customer ID
      required: false
      validators:
        - type: int
          min: 1
  template-source: customers.sql
  connection:
    - customers-parquet
  ```
- Create the SQL template (`sqls/customers.sql`):

  ```sql
  SELECT * FROM customers
  WHERE 1=1
  {{#params.id}}
    AND customer_id = {{{ params.id }}}
  {{/params.id}}
  ```

  The `{{#params.id}}` section is rendered only when the `id` parameter is supplied, so the filter is optional and all rows are returned otherwise.
### 4. Run Flapi
Using the binary:
```bash
./flapi -c flapi.yaml
```
Or with Docker:
```bash
docker run -it --rm -p 8080:8080 -v $(pwd):/config \
  ghcr.io/datazoode/flapi -c /config/flapi.yaml
```
Your API is now available at `http://localhost:8080/customers/`!
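
To confirm the endpoint works, call it with and without the optional `id` filter; the exact response depends on the contents of your Parquet file:

```bash
# List all customers
curl 'http://localhost:8080/customers/'

# Filter by customer id (validated as an integer >= 1)
curl 'http://localhost:8080/customers/?id=42'
```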
## Why Flapi?
Data teams often face challenges when sharing data with applications and services. Traditional approaches require:
- Writing custom API code
- Implementing data validation
- Setting up authentication
- Managing database connections
- Handling error cases
- Writing documentation
Flapi solves these challenges by providing:
### Declarative API Definition
- Define APIs using YAML configuration
- Automatic parameter validation
- Built-in SQL injection prevention
- OpenAPI documentation generation
### Powerful Data Access
- Connect to Parquet files, DuckDB, and BigQuery (see the connection sketch below)
- SQL templating with Mustache
- Query result caching
- Connection pooling
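
As a rough sketch of how a second source could sit next to the Parquet connection, the snippet below adds a BigQuery entry under `connections:` in `flapi.yaml`. The `bigquery-analytics` name and its `properties` keys (`project_id`, `dataset`) are illustrative assumptions, not the verified schema, so check the connections reference for the exact keys your version expects.

```yaml
connections:
  customers-parquet:
    properties:
      path: './data/customers.parquet'
  # Hypothetical BigQuery connection -- the property names below are
  # assumptions for illustration, not the documented schema.
  bigquery-analytics:
    properties:
      project_id: 'my-gcp-project'
      dataset: 'analytics'
```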
### Enterprise Features
- Authentication (Basic Auth; see the sketch after this list)
- Rate limiting
- CORS support
- HTTPS enforcement
- Health checks
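
As a rough illustration of how these options might be switched on in `flapi.yaml`, here is a sketch; the `auth`, `rate-limit`, and `enforce-https` keys and their structure are assumptions made for illustration, so consult the security documentation for the actual option names.

```yaml
# Sketch only -- the option names below are assumptions, not the verified schema.
auth:
  enabled: true
  type: basic
  users:
    - username: admin
      password: change-me
      roles: [admin]
rate-limit:
  enabled: true
  max: 100        # requests per window
  interval: 60    # window length in seconds
enforce-https:
  enabled: true
```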
## Main Features
### Data Sources
- File Formats: Direct connection to Parquet files
- Databases: DuckDB and BigQuery support
- Extensible: Plugin system for additional data sources
### API Features
- Parameter Validation: Type checking, ranges, enums, regex (see the sketch after this list)
- SQL Templates: Mustache templating for dynamic queries
- Caching: Query result caching with DuckDB
- Authentication: Basic auth with user roles
- Rate Limiting: Configurable request limits
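
As a sketch of richer validation, the endpoint fragment below adds enum and regex checks to two query parameters; the `enum`/`regex` type names and the `allowed`/`pattern` keys are assumptions made for illustration, so verify them against the endpoint configuration reference.

```yaml
request:
  - field-name: status
    field-in: query
    description: Customer status filter
    required: false
    validators:
      # Hypothetical validator keys -- names are assumptions for illustration.
      - type: enum
        allowed: [active, inactive]
  - field-name: email
    field-in: query
    description: Customer e-mail address
    required: false
    validators:
      - type: regex
        pattern: '^[^@]+@[^@]+$'
```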
### Developer Experience
- Zero Code: Create APIs with just YAML and SQL
- OpenAPI: Automatic API documentation
- Docker Support: Easy deployment with containers
- Monitoring: Built-in health checks and metrics
## Next Steps
Now that you have your first API endpoint running, you can:
- Add more complex validations to your endpoints
- Configure authentication for secure access
- Connect to different data sources like BigQuery
- Set up rate limiting and monitoring
Check out the rest of our documentation to learn more about these features.