Language Features

Everything you need to build fast, safe, and energy-efficient software.

Core Language

Ownership & Borrowing

Memory safety without garbage collection. Compile-time verification of memory access patterns.

let mut data: Vec<i32> = Vec::new();
let ref1 = &data;        // Immutable borrow
let ref2 = &data;        // Multiple immutable borrows OK
// let m = &mut data;    // Error: mutable borrow while ref1/ref2 are live

Pattern Matching

Exhaustive pattern matching with guards. Destructure any data type elegantly.

match result {
    Ok(value) if value > 0 => process(value),
    Ok(_) => handle_zero(),
    Err(e) => log_error(e),
}
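
Patterns also work in plain let bindings; a quick sketch, assuming a Point struct and a tuple reading are in scope:

let Point { x, y } = origin;     // Destructure struct fields
let (volts, amps) = reading;     // Destructure a tuple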

Generics & Traits

Zero-cost generics with trait bounds. Monomorphization for maximum performance.

trait EnergyAware {
    fn energy_cost(&self) -> Joules;
}

fn optimize<T: EnergyAware>(item: T) { /* ... */ }
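
A minimal sketch of satisfying the bound; the Sensor type and the Joules::from_millis constructor are illustrative assumptions:

struct Sensor { samples: u32 }

impl EnergyAware for Sensor {
    fn energy_cost(&self) -> Joules {
        Joules::from_millis(self.samples * 2)  // Fixed cost per sample (assumed)
    }
}

optimize(Sensor { samples: 128 });  // Monomorphized call: no dynamic dispatch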

Macros

Hygienic procedural macros for metaprogramming. Generate code at compile time.

#[derive(EnergyProfile)]
struct NeuralNetwork {
    layers: Vec<Layer>,
    weights: Tensor,
}
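
What a derive expands to is up to the macro author; hypothetically, EnergyProfile might generate a per-field estimate method:

// Hypothetical: energy_profile() and Tensor::zeros() are illustrative names
let net = NeuralNetwork { layers: Vec::new(), weights: Tensor::zeros() };
let report = net.energy_profile();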

Energy Features

Energy Budgets

Set energy limits for code blocks. The compiler optimizes to meet your constraints.

energy_budget(100.millijoules) {
    // Compiler ensures this stays within budget
    process_batch(data);
}

Hardware Telemetry

Direct access to RAPL counters, thermal sensors, and power states.

let monitor = EnergyMonitor::start();
compute();
let metrics = monitor.stop();
print("Used: {} mJ", metrics.energy_mj);

Thermal Awareness

Automatic code path selection based on CPU temperature and throttling state.

thermal_adapt {
    hot => use_efficient_algorithm(),
    cool => use_fast_algorithm(),
}

Energy Profiling

Built-in profiler shows energy consumption per function, loop, and expression.

$ joulec profile app.joule
main         12.4 mJ (100%)
  compute    10.2 mJ (82%)
  io          2.2 mJ (18%)

Heterogeneous Computing

GPU (CUDA/ROCm)

Write GPU kernels in Joule syntax. Automatic memory transfer optimization.
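
A hypothetical sketch of a Joule kernel; the #[gpu_kernel] attribute, thread_index() builtin, and launch() call are illustrative names, not confirmed API:

#[gpu_kernel]
fn scale(input: &[f32], output: &mut [f32], factor: f32) {
    let i = thread_index();      // One thread per element (assumed builtin)
    if i < input.len() {
        output[i] = input[i] * factor;
    }
}

// Host side: the compiler batches host-device transfers automatically.
scale.launch(&data, &mut result, 2.0);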

TPU (XLA)

Target Google TPUs through XLA backend. Optimized for ML workloads.
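
A hedged sketch of routing a function through the XLA backend; the #[target(xla)] attribute and tensor ops are illustrative:

#[target(xla)]
fn forward(input: Tensor, weights: Tensor) -> Tensor {
    relu(input.matmul(weights))  // Lowered to fused XLA ops (assumed)
}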

NPU (Apple/Qualcomm)

Neural engine support for edge inference. Maximum efficiency on mobile.
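
A hypothetical sketch of pinning inference to the neural engine; the #[target(npu)] attribute and quantize() call are illustrative names:

#[target(npu)]
fn infer(model: &NeuralNetwork, frame: Tensor) -> Tensor {
    model.forward(frame.quantize::<i8>())  // Int8 quantization for the NPU (assumed)
}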