How to Use Automata AI

Getting started is simple. Follow these steps to generate your first optimized edge AI model.

1 Create Your Account

Sign up for a free account to access the Automata AI platform. Choose a plan that fits your needs, from our free tier for experimentation to enterprise options for production deployments.

2 Prepare Your Dataset

Upload your training data in supported formats such as images, sensor readings, or time-series data. Our platform accepts common formats including CSV, images (JPEG, PNG), and binary sensor data. Ensure your data is properly labeled and organized.
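Before uploading, it helps to sanity-check labels. A minimal sketch of validating a labeled CSV dataset — the column names (`label`, the sensor-axis columns) are illustrative assumptions, not a format mandated by Automata AI:

```python
import csv
import io
from collections import Counter

def summarize_dataset(csv_text):
    """Return per-label sample counts and the number of rows missing a label."""
    reader = csv.DictReader(io.StringIO(csv_text))
    counts = Counter()
    missing = 0
    for row in reader:
        label = (row.get("label") or "").strip()
        if label:
            counts[label] += 1
        else:
            missing += 1  # unlabeled rows should be fixed or dropped
    return counts, missing

sample = """ax,ay,az,label
0.1,0.0,9.8,idle
0.9,0.2,9.1,walking
0.8,0.1,9.0,walking
0.2,0.1,9.7,
"""
counts, missing = summarize_dataset(sample)
# counts shows class balance; missing > 0 means rows need labeling
```

A quick pass like this catches unlabeled rows and heavy class imbalance before training time is spent.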

3 Define Your Hardware Target

Specify your target device's constraints to ensure the deployed model fits and performs optimally:

  • Memory Limits: Define maximum Flash (storage) and RAM usage (e.g., <256KB for Cortex-M4).
  • Latency: Set the maximum allowed inference time (e.g., <100ms for real-time audio).
  • Power Budget: Specify active power consumption limits (mW) for battery-operated devices.
  • Architecture: Select specific cores (ARM Cortex-M, RISC-V) to enable hardware-specific optimizations (CMSIS-NN, DSP).
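The constraints above can be captured in a single target specification. This is a hypothetical sketch — the exact field names Automata AI expects may differ; these simply mirror the four constraint types listed:

```python
import json

# Assumed field names, chosen to mirror the constraints described above.
target = {
    "architecture": "arm-cortex-m4",
    "flash_limit_kb": 256,           # max model storage
    "ram_limit_kb": 128,             # peak working memory during inference
    "max_latency_ms": 100,           # per-inference time budget
    "power_budget_mw": 50,           # active power for battery operation
    "accelerators": ["cmsis-nn", "dsp"],
}
spec = json.dumps(target, indent=2)
```

Tightening any one of these (e.g., halving the RAM limit) narrows the set of architectures the search can consider, so start from your device's real datasheet numbers.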

4 Configure Your Model

Select your pipeline type and fine-tune the preprocessing parameters for your data:

Audio Pipeline
  • Sampling Rate: 16kHz (voice) to 44.1kHz (music).
  • Preprocessing: MFCCs, MFE, or Spectrograms for feature extraction.
  • Windowing: Configurable frame size and stride (e.g., 20ms/10ms).
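The windowing step above (20 ms frames, 10 ms stride at 16 kHz) can be sketched in plain NumPy. A real MFCC/MFE front end would apply a window function and filterbanks on top of this framing:

```python
import numpy as np

def frame_signal(signal, sample_rate=16000, frame_ms=20, stride_ms=10):
    """Split a 1-D signal into overlapping frames (frame_ms long, stride_ms apart)."""
    frame_len = int(sample_rate * frame_ms / 1000)   # 320 samples at 16 kHz
    stride = int(sample_rate * stride_ms / 1000)     # 160 samples at 16 kHz
    n_frames = 1 + max(0, (len(signal) - frame_len) // stride)
    return np.stack([signal[i * stride : i * stride + frame_len]
                     for i in range(n_frames)])

one_second = np.zeros(16000, dtype=np.float32)   # 1 s of 16 kHz audio
frames = frame_signal(one_second)                # -> 99 overlapping 20 ms frames
```

Smaller strides give finer time resolution but more frames per second, which raises per-inference compute on the device.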
Image Pipeline
  • Resolution: 96x96 to 320x320 pixels (impacts size/speed).
  • Color Mode: Grayscale (1-channel) or RGB (3-channel).
  • Backbones: MobileNetV1/V2, ResNet, or reduced-depth custom CNNs.
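The image-pipeline preprocessing above can be sketched as a nearest-neighbor resize to the 96x96 input resolution plus an RGB-to-grayscale conversion. Real pipelines typically use a library (Pillow, OpenCV); this NumPy-only version just shows the transform:

```python
import numpy as np

def preprocess(image, size=96, grayscale=True):
    """Nearest-neighbor resize an HxWx3 image to size x size; optionally to 1 channel."""
    h, w, _ = image.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = image[rows][:, cols]   # nearest-neighbor index selection
    if grayscale:
        # Standard luma weights for RGB -> grayscale
        return (resized @ np.array([0.299, 0.587, 0.114])).astype(np.float32)
    return resized.astype(np.float32)

img = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
out = preprocess(img)   # shape (96, 96), single channel
```

Grayscale at 96x96 cuts input size 3x versus RGB, which is often the difference between fitting and not fitting a Cortex-M RAM budget.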
Sensor Pipeline
  • Input Axes: 3-axis (Accel) to 9-axis (IMU + Mag).
  • Window Size: Time duration per inference (e.g., 2000ms).
  • Features: Spectral analysis (FFT), peaks, RMS, and zero-crossing rate.
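The window-level sensor features listed above (RMS, zero-crossing rate, dominant FFT frequency) can be sketched for a single accelerometer axis. A 2000 ms window at an assumed 100 Hz sampling rate holds 200 samples:

```python
import numpy as np

def window_features(samples, sample_rate=100):
    """Compute RMS, zero-crossing rate, and dominant frequency for one window."""
    samples = np.asarray(samples, dtype=np.float64)
    rms = np.sqrt(np.mean(samples ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(samples))) > 0)   # fraction of sign flips
    spectrum = np.abs(np.fft.rfft(samples - samples.mean()))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    dominant = freqs[np.argmax(spectrum)]
    return {"rms": rms, "zcr": zcr, "dominant_hz": dominant}

t = np.arange(200) / 100.0                           # 2 s window at 100 Hz
feats = window_features(np.sin(2 * np.pi * 5 * t))   # 5 Hz test tone
```

These hand-crafted features feed a small classifier, which is usually far cheaper on-device than running a neural network over the raw waveform.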

5 Let Automata AI Work

Our automated pipeline explores model architectures, trains and optimizes multiple candidates, applies hardware-specific optimizations, and validates performance against your constraints. This process typically takes minutes to hours depending on complexity.

6 Download and Deploy

Review model performance metrics and accuracy benchmarks. Download your optimized model in your preferred format (TensorFlow Lite, ONNX, C arrays, etc.). Deploy to your device using our integration guides and example code.
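The "C arrays" export format embeds the model bytes as a const array so the model compiles directly into flash. Tools like `xxd -i` produce the same output; this sketch shows the idea (the variable name is an arbitrary choice, not Automata AI's convention):

```python
def to_c_array(model_bytes, name="g_model"):
    """Render raw model bytes as a C source snippet: const array + length."""
    body = ",\n  ".join(
        ", ".join(f"0x{b:02x}" for b in model_bytes[i:i + 12])
        for i in range(0, len(model_bytes), 12)
    )
    return (
        f"const unsigned char {name}[] = {{\n  {body}\n}};\n"
        f"const unsigned int {name}_len = {len(model_bytes)};\n"
    )

header = to_c_array(b"\x1c\x00\x00\x00TFL3")   # first bytes of a .tflite file
```

On a microcontroller, the array is typically placed in flash (e.g., marked `const`) and handed to the inference runtime by pointer, avoiding any filesystem.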

Best Practices

For best results:

  • Ensure your training dataset is representative of real-world conditions.
  • Start with relaxed constraints and tighten them as needed.
  • Use our validation tools to test models before deployment.
  • Monitor model performance in production and retrain when necessary.