Orlando
Transform transformations, not data. Compositional data processing via WebAssembly.
Orlando brings the power of transducers to JavaScript and TypeScript through a blazing-fast Rust/WebAssembly implementation. Named after the bridger character in Greg Egan's Diaspora, who embodied transformation at a fundamental level.
What Are Transducers?
Transducers compose transformations, not data.
Traditional JavaScript array methods create intermediate arrays at each step:
// Traditional approach - creates 2 intermediate arrays
const result = data
.map(x => x * 2) // intermediate array 1
.filter(x => x > 10) // intermediate array 2
.slice(0, 5); // final result
// For 1M items, this allocates ~24MB of intermediate memory
Orlando transducers execute transformations in a single pass with zero intermediate allocations:
import init, { Pipeline } from 'orlando-transducers';
await init();
const pipeline = new Pipeline()
.map(x => x * 2)
.filter(x => x > 10)
.take(5);
const result = pipeline.toArray(data);
// For 1M items, stops after finding 5 matches
// Memory: ~40 bytes (just the 5-element result)
Key Features
- No intermediate allocations - Single pass over data
- Early termination - Stops processing as soon as possible
- Composable - Build complex pipelines from simple operations
- WASM-powered - Native performance via WebAssembly
- Automatic fusion - Map-Filter chains automatically optimized
- Functional optics - Lens, Prism, Iso, Fold, Traversal for immutable data
- Profunctor encoding - Principled optics composition via Karpal
- Reactive primitives - Signal and Stream types (Rust API)
- Geometric optics - Multivector grade projection and extraction
- Tiny - <50KB compressed WASM bundle
Performance
Real-world benchmarks show 3-19x speedup over native JavaScript array methods:
| Scenario | JavaScript Arrays | Orlando | Speedup |
|---|---|---|---|
| Map - Filter - Take 10 (100K items) | 2.3ms | 0.6ms | 3.8x |
| Complex pipeline (10 ops, 50K items) | 8.7ms | 2.1ms | 4.1x |
| Early termination (find first 5 in 1M items) | 15.2ms | 0.8ms | 19x |
Orlando's architecture is designed around four principles:
- Zero intermediate arrays - unlike array methods, which allocate a new array at each step
- Early termination - Orlando stops processing immediately when conditions are met
- WASM execution - Pre-compiled, consistent native performance
- SIMD optimizations - Vectorized operations for numeric data (when available)
Category Theory Foundation
Transducers are natural transformations between fold functors. A transducer transforms a reducing function:
forall Acc. ((Acc, Out) -> Acc) -> ((Acc, In) -> Acc)
This foundation guarantees:
- Identity law: id . f = f . id = f
- Associativity: (f . g) . h = f . (g . h)
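The signature above can be made concrete in plain JavaScript. The sketch below is illustrative only, not Orlando's implementation: a transducer is just a function from reducer to reducer, and ordinary function composition fuses the steps into a single pass.

```javascript
// A transducer is a function: reducer -> reducer (plain-JS sketch).
const mapT = (f) => (reducer) => (acc, x) => reducer(acc, f(x));
const filterT = (pred) => (reducer) => (acc, x) =>
  pred(x) ? reducer(acc, x) : acc;

// Ordinary function composition fuses the steps into one pass
const compose = (t1, t2) => (reducer) => t1(t2(reducer));

const xform = compose(mapT(x => x * 2), filterT(x => x > 10));
const push = (acc, x) => (acc.push(x), acc);

// One reduce, no intermediate arrays: map runs first, then filter
const result = [4, 5, 6, 7].reduce(xform(push), []);
// result: [12, 14]
```

Note the composition order: `compose(mapT(...), filterT(...))` applies the map first, because the map transducer wraps the filter-transformed reducer.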
Orlando's optics hierarchy is built on profunctor encoding via Karpal, providing mathematically principled composition across optic types (Lens, Prism, Iso, Fold, Traversal).
When Should You Use Orlando?
Great for:
- Large datasets (>1000 elements) - More data = bigger performance wins
- Complex pipelines (3+ operations) - Single-pass execution shines
- Early termination scenarios - take, takeWhile, find first N
- Memory-constrained environments - No intermediate allocations
- Reusable transformation logic - Define pipelines once, use many times
Consider array methods for:
- Small datasets (<100 elements) - Overhead may not be worth it
- Single operations - array.map(fn) is simpler than a pipeline
- Prototyping - Array methods are more familiar during development
Getting Started
Installation
npm install orlando-transducers
# or
yarn add orlando-transducers
# or
pnpm add orlando-transducers
Using from CDN
<script type="module">
import init, { Pipeline } from 'https://unpkg.com/orlando-transducers';
await init();
// Use Pipeline...
</script>
Initializing WASM
Orlando uses WebAssembly under the hood. You need to initialize the WASM module once before using any API:
import init, { Pipeline } from 'orlando-transducers';
// Initialize WASM (once per application)
await init();
In a framework context, initialize in your app's entry point:
// main.js / index.js
import init from 'orlando-transducers';
async function bootstrap() {
await init();
// Now all Orlando APIs are ready
startApp();
}
bootstrap();
Your First Pipeline
import init, { Pipeline } from 'orlando-transducers';
await init();
// Create a reusable pipeline
const pipeline = new Pipeline()
.map(x => x * 2)
.filter(x => x % 3 === 0)
.take(5);
// Execute on data
const data = Array.from({ length: 100 }, (_, i) => i + 1);
const result = pipeline.toArray(data);
console.log(result); // [6, 12, 18, 24, 30]
Key concepts:
- Pipelines are reusable - define once, execute on any data
- Fluent API - chain transformations with method calls
- Lazy execution - nothing runs until you call a terminal operation (.toArray(), .reduce(), etc.)
- Early termination - .take(5) stops processing after 5 results
TypeScript
Orlando works with TypeScript out of the box:
import init, { Pipeline } from 'orlando-transducers';
await init();
interface User {
id: number;
name: string;
email: string;
active: boolean;
}
const activeEmails = new Pipeline()
.filter((user: User) => user.active)
.map((user: User) => user.email)
.take(100);
const emails = activeEmails.toArray(users);
Core Concepts
Transformations vs Collectors
Transformations build up the pipeline:
const pipeline = new Pipeline()
.map(x => x * 2) // transformation
.filter(x => x > 10) // transformation
.take(5); // transformation
Collectors (terminal operations) execute the pipeline and produce a result:
const array = pipeline.toArray(data); // collect to array
const sum = pipeline.reduce(data, (a, b) => a + b, 0); // reduce to value
Pipeline Reuse
A key advantage over array methods is that pipelines are reusable objects:
const normalize = new Pipeline()
.filter(x => x != null)
.map(x => x.trim().toLowerCase())
.filter(x => x.length > 0);
// Use on different datasets
const emails = normalize.toArray(rawEmails);
const names = normalize.toArray(rawNames);
const tags = normalize.toArray(rawTags);
Early Termination
Orlando stops processing the moment it has enough results:
// Only processes ~13 elements out of 1,000,000
const result = new Pipeline()
.map(x => x * 2)
.filter(x => x % 3 === 0)
.take(5)
.toArray(Array.from({ length: 1_000_000 }, (_, i) => i));
This is where Orlando's biggest performance wins come from. Traditional array methods must process the entire array at every step.
Using as a Rust Crate
Orlando is also a first-class Rust library:
[dependencies]
orlando-transducers = "0.5.0"
use orlando_transducers::iter_ext::PipelineBuilder;

let result = PipelineBuilder::new()
    .map(|x: i32| x * 2)
    .filter(|x: &i32| *x > 10)
    .take(5)
    .run(1..100);

assert_eq!(result, vec![12, 14, 16, 18, 20]);
Browser Compatibility
Orlando works in all modern browsers with WebAssembly support:
- Chrome 57+
- Firefox 52+
- Safari 11+
- Edge 16+
- Node.js 12+
Pipeline (JavaScript API)
The Pipeline class is the main entry point for building transducer pipelines in JavaScript/TypeScript.
Quick Start
import init, { Pipeline } from 'orlando-transducers';
await init();
const pipeline = new Pipeline()
.map(x => x * 2)
.filter(x => x > 10)
.take(5);
const result = pipeline.toArray(data);
Transformation Methods
All transformation methods return a new Pipeline instance, allowing fluent method chaining.
map(fn)
Transform each value using the provided function.
map(fn: (value: T) => U): Pipeline
new Pipeline()
.map(x => x * 2)
.map(x => x + 1)
.toArray([1, 2, 3]); // [3, 5, 7]
filter(predicate)
Keep only values that match the predicate.
filter(predicate: (value: T) => boolean): Pipeline
new Pipeline()
.filter(x => x % 2 === 0)
.filter(x => x > 10)
.toArray([1, 5, 12, 20, 3]); // [12, 20]
take(n)
Take the first n elements, then stop processing. This is where Orlando's early termination shines.
take(n: number): Pipeline
new Pipeline()
.filter(x => x % 2 === 0)
.take(3)
.toArray([1, 2, 3, 4, 5, 6, 7, 8]); // [2, 4, 6]
takeWhile(predicate)
Take elements while the predicate is true, then stop.
takeWhile(predicate: (value: T) => boolean): Pipeline
new Pipeline()
.takeWhile(x => x < 100)
.toArray([1, 5, 50, 200, 10]); // [1, 5, 50]
drop(n)
Skip the first n elements.
drop(n: number): Pipeline
new Pipeline()
.drop(3)
.toArray([1, 2, 3, 4, 5]); // [4, 5]
dropWhile(predicate)
Skip elements while the predicate is true.
dropWhile(predicate: (value: T) => boolean): Pipeline
new Pipeline()
.dropWhile(x => x < 10)
.toArray([1, 5, 12, 20, 3]); // [12, 20, 3]
flatMap(fn)
Transform and flatten nested arrays.
flatMap(fn: (value: T) => Array<U>): Pipeline
new Pipeline()
.flatMap(x => [x, x * 10])
.toArray([1, 2, 3]); // [1, 10, 2, 20, 3, 30]
tap(fn)
Perform side effects without modifying values. Useful for debugging.
tap(fn: (value: T) => void): Pipeline
new Pipeline()
.tap(x => console.log('Processing:', x))
.map(x => x * 2)
.tap(x => console.log('Result:', x))
.toArray([1, 2, 3]);
reject(predicate)
Remove matching elements (inverse of filter).
reject(predicate: (value: T) => boolean): Pipeline
new Pipeline()
.reject(x => x < 0)
.toArray([-1, 2, -3, 4]); // [2, 4]
chunk(n)
Group elements into arrays of size n.
chunk(n: number): Pipeline
new Pipeline()
.chunk(3)
.toArray([1, 2, 3, 4, 5, 6, 7]); // [[1,2,3], [4,5,6], [7]]
unique()
Remove consecutive duplicate values.
unique(): Pipeline
new Pipeline()
.unique()
.toArray([1, 1, 2, 2, 3, 1]); // [1, 2, 3, 1]
scan(fn, initial)
Accumulate values with intermediate results.
scan(fn: (acc: A, value: T) => A, initial: A): Pipeline
new Pipeline()
.scan((sum, x) => sum + x, 0)
.toArray([1, 2, 3, 4]); // [1, 3, 6, 10]
Pipeline Enhancement Methods
pluck(key)
Extract a single property from each object.
new Pipeline()
.pluck('name')
.toArray([{ name: "Alice" }, { name: "Bob" }]); // ["Alice", "Bob"]
project(keys)
Extract multiple properties from each object.
new Pipeline()
.project(['id', 'name'])
.toArray(users); // [{ id: 1, name: "Alice" }, ...]
compact()
Remove all falsy values (null, undefined, false, 0, '', NaN).
new Pipeline()
.compact()
.toArray([0, 1, null, 2, undefined, 3, '', 4]); // [1, 2, 3, 4]
flatten(depth)
Flatten nested arrays to a given depth.
new Pipeline()
.flatten(2)
.toArray([[1, [2]], [3, [4, [5]]]]); // [1, 2, 3, 4, [5]]
whereMatches(spec)
Filter objects matching a specification pattern.
new Pipeline()
.whereMatches({ active: true, role: 'admin' })
.toArray(users);
Lens Pipeline Methods
viewLens(lens)
Extract the focused value via a lens.
const nameLens = lens('name');
new Pipeline()
.viewLens(nameLens)
.toArray(users); // ["Alice", "Bob", ...]
overLens(lens, fn)
Transform values through a lens.
const priceLens = lens('price');
new Pipeline()
.overLens(priceLens, p => p * 0.9)
.toArray(products); // each product with 10% discount
filterLens(lens, predicate)
Filter by lens-focused value.
const ageLens = lens('age');
new Pipeline()
.filterLens(ageLens, age => age >= 18)
.toArray(users);
setLens(lens, value)
Set a fixed value via a lens on every element.
const statusLens = lens('status');
new Pipeline()
.setLens(statusLens, 'published')
.toArray(posts);
Terminal Operations (Collectors)
These execute the pipeline and return a result.
toArray(source)
Collect all results into an array.
toArray(source: Array<T>): Array<U>
reduce(source, reducer, initial)
Custom reduction with a reducer function.
reduce(source: Array<T>, reducer: (acc: A, value: U) => A, initial: A): A
const sum = new Pipeline()
.map(x => x * 2)
.reduce([1, 2, 3, 4], (acc, x) => acc + x, 0);
// sum: 20
Standalone Collectors
These functions operate independently of the Pipeline:
| Function | Description | Example |
|---|---|---|
| find(pipeline, data, pred) | Find first matching element | find(pipeline, data, x => x > 10) |
| partition(pipeline, data, pred) | Split into [matching, non-matching] | partition(pipeline, data, isValid) |
| groupBy(pipeline, data, keyFn) | Group elements by key | groupBy(pipeline, data, x => x.type) |
| frequencies(data) | Count occurrences | frequencies([1, 2, 2, 3]) |
| topK(data, k) | Get k largest elements | topK(scores, 10) |
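To make the table concrete, here are plain-JS equivalents of two of these collectors. These sketches only illustrate the semantics; Orlando's versions run the pipeline in WASM, and the exact return types may differ.

```javascript
// Count occurrences of each value (illustrative sketch)
const frequencies = (data) => {
  const counts = new Map();
  for (const x of data) counts.set(x, (counts.get(x) ?? 0) + 1);
  return counts;
};

// k largest elements, descending (illustrative sketch)
const topK = (data, k) =>
  [...data].sort((a, b) => b - a).slice(0, k);

frequencies([1, 2, 2, 3]); // Map { 1 => 1, 2 => 2, 3 => 1 }
topK([5, 1, 9, 3, 7], 2);  // [9, 7]
```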
Standalone Functions
Statistical Operations
| Function | Description | Example |
|---|---|---|
| product(array) | Multiply all numbers | product([2, 3, 4]) = 24 |
| mean(array) | Arithmetic mean | mean([1, 2, 3, 4, 5]) = 3 |
| median(array) | Middle value | median([1, 2, 3, 4, 5]) = 3 |
| min(array) / max(array) | Min/max value | max([1, 5, 3]) = 5 |
| minBy(array, fn) / maxBy(array, fn) | Min/max by key | maxBy(users, u => u.score) |
| variance(array) | Sample variance | variance([2, 4, 6, 8]) |
| stdDev(array) | Standard deviation | stdDev([2, 4, 6, 8]) |
| quantile(array, p) | P-th quantile | quantile(data, 0.95) |
| mode(array) | Most frequent value | mode([1, 2, 2, 3]) = 2 |
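For reference, the semantics of a few of these statistics, written as plain-JS sketches (not Orlando's implementation). Note that sample variance divides by n - 1, not n.

```javascript
const mean = (xs) => xs.reduce((a, b) => a + b, 0) / xs.length;

// Middle value; average of the two middle values for even length
const median = (xs) => {
  const s = [...xs].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
};

// Sample variance divides by n - 1 (Bessel's correction)
const variance = (xs) => {
  const m = mean(xs);
  return xs.reduce((a, x) => a + (x - m) ** 2, 0) / (xs.length - 1);
};

mean([1, 2, 3, 4, 5]);   // 3
median([1, 2, 3, 4, 5]); // 3
variance([2, 4, 6, 8]);  // 6.666...
```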
Collection Utilities
| Function | Description | Example |
|---|---|---|
| sortBy(array, fn) | Sort by key | sortBy(users, u => u.age) |
| sortWith(array, cmp) | Sort with comparator | sortWith(nums, (a,b) => a - b) |
| reverse(array) | Reverse order | reverse([1, 2, 3]) = [3, 2, 1] |
| range(start, end, step) | Numeric sequence | range(0, 10, 2) = [0, 2, 4, 6, 8] |
| repeat(value, n) | Repeat value | repeat('x', 3) = ['x', 'x', 'x'] |
| cycle(array, n) | Cycle array | cycle([1, 2], 3) = [1, 2, 1, 2, 1, 2] |
| unfold(seed, fn, limit) | Generate from seed | unfold(1, x => x * 2, 5) |
| path(obj, pathArr) | Safe deep access | path(user, ['profile', 'email']) |
| pathOr(obj, path, default) | Path with default | pathOr(config, ['port'], 8080) |
| evolve(obj, transforms) | Nested transforms | evolve(user, { age: n => n + 1 }) |
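Two of the less obvious entries, unfold and path, can be sketched in plain JS. The semantics here are inferred from the table's examples, not taken from Orlando's source.

```javascript
// Generate a sequence from a seed by repeatedly applying fn
// (assumed semantics: the seed itself is included)
const unfold = (seed, fn, limit) => {
  const out = [];
  for (let x = seed; out.length < limit; x = fn(x)) out.push(x);
  return out;
};

// Safe deep access: undefined instead of a thrown TypeError
const path = (obj, keys) =>
  keys.reduce((o, k) => (o == null ? undefined : o[k]), obj);

unfold(1, x => x * 2, 5);                              // [1, 2, 4, 8, 16]
path({ profile: { email: 'a@b.c' } }, ['profile', 'email']); // 'a@b.c'
path({}, ['profile', 'email']);                        // undefined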
Logic Functions
| Function | Description | Example |
|---|---|---|
| both(p1, p2) | AND combinator | both(isPositive, isEven) |
| either(p1, p2) | OR combinator | either(isSmall, isLarge) |
| complement(pred) | NOT combinator | complement(isEven) |
| allPass(preds) | All must pass | allPass([isValid, isActive]) |
| anyPass(preds) | Any must pass | anyPass([isZero, isDivisibleBy10]) |
| When(pred, fn) | Conditional transform | new When(x => x > 0, x => x * 2) |
| Unless(pred, fn) | Inverse conditional | new Unless(x => x > 0, _ => 0) |
| IfElse(pred, onTrue, onFalse) | Branch | new IfElse(x => x >= 0, double, halve) |
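The predicate combinators in this table reduce to simple closures. A plain-JS sketch of their behavior, for illustration only:

```javascript
// Predicate combinators as higher-order functions (illustrative sketch)
const both = (p1, p2) => (x) => p1(x) && p2(x);
const either = (p1, p2) => (x) => p1(x) || p2(x);
const complement = (p) => (x) => !p(x);
const allPass = (preds) => (x) => preds.every(p => p(x));

const isPositive = (x) => x > 0;
const isEven = (x) => x % 2 === 0;

both(isPositive, isEven)(4);      // true
both(isPositive, isEven)(3);      // false
complement(isEven)(3);            // true
allPass([isPositive, isEven])(8); // true
```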
Multi-Input Operations
| Function | Description | Example |
|---|---|---|
| merge(arrays) | Interleave arrays | merge([a, b, c]) |
| zip(a, b) | Combine into pairs | zip([1,2], ['a','b']) |
| zipLongest(a, b, fill) | Zip with fill | zipLongest(a, b, null) |
| intersection(a, b) | Common elements | intersection(a, b) |
| union(a, b) | Unique from both | union(a, b) |
| difference(a, b) | In a, not b | difference(a, b) |
| symmetricDifference(a, b) | In one, not both | symmetricDifference(a, b) |
| cartesianProduct(a, b) | All pairs | cartesianProduct(colors, sizes) |
| takeLast(array, n) | Last N elements | takeLast([1,2,3,4,5], 3) |
| dropLast(array, n) | Drop last N | dropLast([1,2,3,4,5], 2) |
| aperture(array, n) | Sliding windows | aperture([1,2,3,4], 3) |
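Two of these, zip and aperture, sketched in plain JS to show the expected shapes (illustrative only, not Orlando's implementation):

```javascript
// zip stops at the shorter input (illustrative sketch)
const zip = (a, b) =>
  a.slice(0, Math.min(a.length, b.length)).map((x, i) => [x, b[i]]);

// aperture: all sliding windows of size n
const aperture = (xs, n) =>
  xs.length < n
    ? []
    : Array.from({ length: xs.length - n + 1 }, (_, i) => xs.slice(i, i + n));

zip([1, 2], ['a', 'b']);   // [[1, 'a'], [2, 'b']]
aperture([1, 2, 3, 4], 3); // [[1, 2, 3], [2, 3, 4]]
```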
Optics API
Orlando provides a complete hierarchy of functional optics for immutable, composable data access and transformation.
Overview
| Optic | Focus | Read | Write | Use Case |
|---|---|---|---|---|
| Lens | Exactly one value | Yes | Yes | Object properties, nested fields |
| Optional | Zero or one value | Yes | Yes | Nullable fields, partial data |
| Prism | Zero or one (sum type) | Yes | Yes (construct) | Tagged unions, enum variants |
| Iso | Exactly one (bidirectional) | Yes | Yes | Unit conversions, encodings |
| Traversal | Zero or more values | Yes | Yes | Collections, arrays |
| Fold | Zero or more values | Yes | No | Read-only aggregation |
Lens
Focus on exactly one part of a data structure with read/write access.
JavaScript
import { lens, lensPath } from 'orlando-transducers';
// Property lens
const nameLens = lens('name');
nameLens.get(user); // "Alice"
nameLens.set(user, "Bob"); // { ...user, name: "Bob" }
nameLens.over(user, s => s.toUpperCase()); // { ...user, name: "ALICE" }
// Path lens for deep access
const cityLens = lensPath(['address', 'city']);
cityLens.get(user); // "NYC"
cityLens.set(user, "Boston"); // deep immutable update
// Composition
const addressLens = lens('address');
const zipLens = lens('zip');
const userZipLens = addressLens.compose(zipLens);
userZipLens.get(user); // "10001"
Rust
use orlando_transducers::optics::Lens;

let name_lens = Lens::new(
    |user: &User| user.name.clone(),
    |user: &User, name: String| User { name, ..user.clone() },
);

let name = name_lens.get(&user);
let updated = name_lens.set(&user, "Bob".into());
let shouted = name_lens.over(&user, |n| n.to_uppercase());

// Composition via then()
let user_city = address_lens.then(&city_lens);
Lens Laws
All Orlando lenses satisfy:
- GetPut: set(s, get(s)) = s
- PutGet: get(set(s, a)) = a
- PutPut: set(set(s, a1), a2) = set(s, a2)
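The three laws can be checked directly against a hand-rolled property lens. This is a plain-JS sketch of the same get/set shape, not Orlando's internals:

```javascript
// Minimal property lens (illustrative sketch)
const lens = (key) => ({
  get: (s) => s[key],
  set: (s, a) => ({ ...s, [key]: a }),
});

const nameLens = lens('name');
const user = { name: 'Alice', age: 30 };

// GetPut: setting what you got changes nothing (structurally)
const law1 = JSON.stringify(nameLens.set(user, nameLens.get(user)))
  === JSON.stringify(user);

// PutGet: you get back exactly what you set
const law2 = nameLens.get(nameLens.set(user, 'Bob')) === 'Bob';

// PutPut: the second set wins
const law3 = nameLens.get(nameLens.set(nameLens.set(user, 'X'), 'Y')) === 'Y';

// law1, law2, law3 are all true
```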
Optional
Like a Lens, but the focus may not exist. Safe for nullable or missing fields.
JavaScript
import { optional } from 'orlando-transducers';
const phoneLens = optional('phone');
phoneLens.get(user); // undefined (missing field)
phoneLens.getOr(user, "N/A"); // "N/A" (with default)
phoneLens.set(user, "555-0100"); // { ...user, phone: "555-0100" }
phoneLens.over(user, normalize); // no-op if undefined
Rust
use orlando_transducers::optics::Optional;

let phone = Optional::new(
    |u: &User| u.phone.clone(),
    |u: &User, p: String| User { phone: Some(p), ..u.clone() },
);

let val = phone.get_or(&user, "N/A".into());
Prism
Focus on one variant of a sum type. Can both match (preview) and construct (review).
JavaScript
import { prism } from 'orlando-transducers';
const somePrism = prism(
x => x.tag === 'Some' ? x.value : undefined, // preview
v => ({ tag: 'Some', value: v }) // review
);
somePrism.preview({ tag: 'Some', value: 42 }); // 42
somePrism.preview({ tag: 'None' }); // undefined
somePrism.review(42); // { tag: 'Some', value: 42 }
Rust
use orlando_transducers::optics::Prism;

let some_prism = Prism::new(
    |opt: &Option<i32>| *opt,
    |val: i32| Some(val),
);

assert_eq!(some_prism.preview(&Some(42)), Some(42));
assert_eq!(some_prism.review(42), Some(42));
Iso
Lossless, bidirectional conversion between two types.
JavaScript
import { iso } from 'orlando-transducers';
const tempIso = iso(
c => c * 9/5 + 32, // Celsius -> Fahrenheit
f => (f - 32) * 5/9 // Fahrenheit -> Celsius
);
tempIso.to(100); // 212
tempIso.from(212); // 100
tempIso.reverse().to(212); // 100
Rust
use orlando_transducers::optics::Iso;

let celsius_fahrenheit = Iso::new(
    |c: &f64| c * 9.0 / 5.0 + 32.0,
    |f: &f64| (f - 32.0) * 5.0 / 9.0,
);

// Can be used as either a Lens or a Prism
let as_lens = celsius_fahrenheit.as_lens();
let as_prism = celsius_fahrenheit.as_prism();
Traversal
Focus on zero or more values within a structure. Supports reading all and updating all.
JavaScript
import { traversal } from 'orlando-transducers';
const itemsTraversal = traversal(
arr => arr,
(arr, fn) => arr.map(fn)
);
itemsTraversal.getAll([1, 2, 3]); // [1, 2, 3]
itemsTraversal.overAll([1, 2, 3], x => x * 2); // [2, 4, 6]
itemsTraversal.setAll([1, 2, 3], 0); // [0, 0, 0]
Rust
use orlando_transducers::optics::Traversal;

let each = Traversal::new(
    |v: &Vec<i32>| v.clone(),
    |v: &Vec<i32>, f: &dyn Fn(&i32) -> i32| v.iter().map(f).collect(),
);

let doubled = each.over_all(&vec![1, 2, 3], |x| x * 2); // [2, 4, 6]
Fold
Read-only traversal for extracting and aggregating values.
JavaScript
import { fold } from 'orlando-transducers';
const valuesFold = fold(obj => Object.values(obj));
valuesFold.getAll({ a: 1, b: 2, c: 3 }); // [1, 2, 3]
valuesFold.isEmpty({}); // true
valuesFold.length({ a: 1, b: 2 }); // 2
valuesFold.first({ a: 1, b: 2 }); // 1
Rust
use orlando_transducers::optics::Fold;

let items = Fold::fold_of(|v: &Vec<i32>| v.clone());

items.any(&data, |x| *x > 10);  // true if any > 10
items.all(&data, |x| *x > 0);   // true if all > 0
items.find(&data, |x| *x > 5);  // Some(first > 5)
items.is_empty(&data);          // bool
items.length(&data);            // usize
items.first(&data);             // Option<i32>
Cross-Type Conversions
Optics can be widened to more general types:
| From | To | Method |
|---|---|---|
| Lens | Traversal | .to_traversal() |
| Lens | Fold | .to_fold() |
| Prism | Traversal | .to_traversal() |
| Prism | Fold | .to_fold() |
| Iso | Lens | .as_lens() |
| Iso | Prism | .as_prism() |
| Traversal | Fold | .as_fold() |
Composition
All optics support composition for deeper access:
// JavaScript
const userCity = addressLens.compose(cityLens);

// Rust
let user_city = address_lens.then(&city_lens);
let deep_fold = outer_fold.then(&inner_fold);
let nested = outer_traversal.then(&inner_traversal);
Profunctor Optics API
Orlando's optics use a profunctor encoding via Karpal for principled composition and cross-type conversions.
Profunctor Constraints
Each optic type corresponds to a constraint on a profunctor:
| Optic | Constraint | Meaning |
|---|---|---|
| Lens | Strong | Focus through products (structs/tuples) |
| Prism | Choice | Focus through sums (enums/variants) |
| Iso | Profunctor | Only needs dimap (weakest) |
| Traversal | Traversing | Focus through collections |
transform() Method
Each optic exposes its profunctor encoding:
use orlando_transducers::optics::{Lens, Prism, Iso, Traversal};

// Lens -> Strong profunctor
let strong = lens.transform();

// Prism -> Choice profunctor
let choice = prism.transform();

// Iso -> Profunctor (weakest constraint)
let prof = iso.transform();

// Traversal -> Traversing profunctor
let traversing = traversal.transform();
Karpal Profunctor Traits
Re-exported from Karpal for use with Orlando's optics.
Profunctor
The base trait. Supports dimap(f, g) for mapping over both input and output.
use orlando_transducers::profunctor::Profunctor;

// dimap transforms both sides of a profunctor
// p.dimap(f, g) where f: B -> A, g: C -> D gives P<B, D> from P<A, C>
Strong
Extends Profunctor with product operations. Used by Lens.
use orlando_transducers::profunctor::Strong;

// first():  P<A, B> -> P<(A, C), (B, C)>
// second(): P<A, B> -> P<(C, A), (C, B)>
Choice
Extends Profunctor with sum operations. Used by Prism.
use orlando_transducers::profunctor::Choice;

// left():  P<A, B> -> P<Either<A, C>, Either<B, C>>
// right(): P<A, B> -> P<Either<C, A>, Either<C, B>>
Traversing
Extends Strong with collection operations. Used by Traversal.
use orlando_transducers::profunctor::Traversing;

// wander(): applies a profunctor across multiple foci
Concrete Profunctor Types
| Type | Description | Use Case |
|---|---|---|
| FnP<A, B> | Function arrow A -> B | Getting and setting (modify) |
| ForgetF<R, A, B> | Forgets B, extracts R from A | Read-only access (getters, folds) |
| TaggedF<A, B> | Forgets A, produces B | Write-only access (review/construct) |
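The intuition behind these encodings can be shown over plain JavaScript functions. These are illustrative analogues, not the Rust types; the names dimapFn, forget, and dimapForget are made up for this sketch.

```javascript
// FnP analogue: the function arrow itself; dimap pre- and post-composes
const dimapFn = (p, f, g) => (x) => g(p(f(x)));

// Forget analogue: keeps only the "getter" half, so dimap can
// only pre-compose -- the output side is discarded (read-only)
const forget = (getter) => ({ runForget: getter });
const dimapForget = (p, f, _g) => forget((x) => p.runForget(f(x)));

const double = (x) => x * 2;
const shown = dimapFn(double, x => x + 1, x => `${x}!`)(4); // "10!"

const getName = forget((u) => u.name);
const name = dimapForget(getName, (pair) => pair.user)
  .runForget({ user: { name: 'Alice' } }); // "Alice"
```

This is why Forget-style profunctors can only express getters and folds: the g argument to dimap has nothing to act on.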
Re-exported Types
use orlando_transducers::profunctor::{
    Profunctor, Strong, Choice, Traversing,
    FnP, ForgetF, TaggedF, Monoid,
};

// Also available from lib.rs:
use orlando_transducers::{Getter, Setter, Review};
Composition with then()
Compose optics while preserving profunctor constraints:
// Lens + Lens = Lens (both Strong)
let user_city = address_lens.then(&city_lens);

// Fold + Fold = Fold
let all_names = users_fold.then(&name_fold);

// Traversal + Traversal = Traversal (both Traversing)
let nested = outer.then(&inner);
Cross-Type Conversions
The hierarchy flows from specific to general:
Iso (Profunctor — weakest constraint)
├── Lens (Strong)
└── Prism (Choice)
    ├── Traversal (Traversing)
    └── Fold (read-only)
let traversal = lens.to_traversal();
let fold = lens.to_fold();
let fold = prism.to_fold();
let lens = iso.as_lens();
let prism = iso.as_prism();
let fold = traversal.as_fold();
Fold Aggregation
Folds support rich queries over focused values:
use orlando_transducers::optics::Fold;

let items = Fold::fold_of(|v: &Vec<i32>| v.clone());

items.any(&data, |x| *x > 10);  // bool
items.all(&data, |x| *x > 0);   // bool
items.find(&data, |x| *x > 5);  // Option<i32>
items.is_empty(&data);          // bool
items.length(&data);            // usize
items.first(&data);             // Option<i32>
Storage Model
Orlando uses Rc<dyn Fn> for optic closures, enabling:
- Cloning - Optics can be freely cloned and shared
- Composition - Both sides of a composition can reference the same optic
- ComposedLens<S, A> is simply a type alias for Lens<S, A>
The Rc overhead is negligible, and WASM is single-threaded so Send/Sync are not required.
Geometric Optics API
Operations on multivector coefficient arrays for geometric algebra. These work on plain &[f64] (Rust) or Float64Array (JavaScript).
In geometric algebra, a multivector with n dimensions has 2^n coefficients, one per basis blade, organized by grade (scalar = 0, vectors = 1, bivectors = 2, etc.).
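The index-to-grade mapping described above is just a popcount over the blade's bit pattern. A plain-JS sketch of the layout for 3 dimensions (illustrative; mirrors the documented bladeGrade semantics):

```javascript
// Grade of a basis blade = number of set bits in its index
const bladeGrade = (index) => {
  let n = 0;
  for (let i = index; i > 0; i >>= 1) n += i & 1;
  return n;
};

// For n = 3 there are 2^3 = 8 coefficients, grouped by grade:
const grades = Array.from({ length: 8 }, (_, i) => bladeGrade(i));
// grades: [0, 1, 1, 2, 1, 2, 2, 3]
// grade 0: scalar; grade 1: e1, e2, e3;
// grade 2: e12, e13, e23; grade 3: e123
```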
Grade Inspection
bladeGrade(index) / blade_grade(index)
Compute the grade of a basis blade from its index (popcount).
bladeGrade(0); // 0 (scalar)
bladeGrade(1); // 1 (e1)
bladeGrade(3); // 2 (e12)
bladeGrade(7); // 3 (e123)
bladesAtGradeCount(dimension, grade) / blades_at_grade_count(dimension, grade)
Number of basis blades at a given grade (binomial coefficient).
bladesAtGradeCount(3, 0); // 1 (scalar)
bladesAtGradeCount(3, 1); // 3 (vectors: e1, e2, e3)
bladesAtGradeCount(3, 2); // 3 (bivectors: e12, e13, e23)
bladesAtGradeCount(3, 3); // 1 (pseudoscalar: e123)
gradeIndices(dimension, grade) / grade_indices(dimension, grade)
Get coefficient array indices for all blades at a given grade.
gradeIndices(3, 1); // [1, 2, 4] (indices of e1, e2, e3)
gradeIndices(3, 2); // [3, 5, 6] (indices of e12, e13, e23)
Grade Extraction and Projection
gradeExtract(dimension, grade, mv) / grade_extract(dimension, grade, coefficients)
Extract only the coefficients at a specific grade.
const mv = new Float64Array([1, 2, 3, 4, 5, 6, 7, 8]);
gradeExtract(3, 1, mv); // [2, 3, 5] (vector components)
gradeProject(dimension, grade, mv) / grade_project(dimension, grade, coefficients)
Project onto a single grade, zeroing all others. Returns a full-size multivector.
const mv = new Float64Array([1, 2, 3, 4, 5, 6, 7, 8]);
gradeProject(3, 1, mv); // [0, 2, 3, 0, 5, 0, 0, 0]
gradeProjectMax(dimension, maxGrade, mv) / grade_project_max(dimension, max_grade, coefficients)
Project onto all grades up to and including maxGrade.
gradeProjectMax(3, 1, mv); // [1, 2, 3, 0, 5, 0, 0, 0] (scalar + vector)
Grade Analysis
gradeMask(dimension, mv) / grade_mask(dimension, coefficients)
Bitmask indicating which grades have non-zero components. Bit k is set if grade k is present.
const pure_vector = new Float64Array([0, 1, 0, 0, 0, 0, 0, 0]);
gradeMask(3, pure_vector); // 0b010 = 2 (only grade 1)
hasGrade(dimension, grade, mv) / has_grade(dimension, grade, coefficients)
Check if a specific grade has non-zero components.
hasGrade(3, 1, mv); // true
hasGrade(3, 3, mv); // true (e123 component is 8)
isPureGrade(dimension, mv) / is_pure_grade(dimension, coefficients)
Check if the multivector has components at only one grade.
const pure = new Float64Array([0, 1, 2, 0, 3, 0, 0, 0]);
isPureGrade(3, pure); // true (only grade 1)
const mixed = new Float64Array([1, 1, 0, 0, 0, 0, 0, 0]);
isPureGrade(3, mixed); // false (grade 0 + grade 1)
Component Access
componentGet(mv, bladeIndex) / component_get(coefficients, blade_index)
Get a single coefficient by blade index.
componentGet(mv, 1); // coefficient of e1
componentSet(mv, bladeIndex, value) / component_set(coefficients, blade_index, value)
Set a single coefficient. Returns a new array.
const updated = componentSet(mv, 1, 3.14); // set e1 to 3.14
Norms
mvNorm(mv) / norm(coefficients)
Euclidean norm (magnitude) of a multivector.
const v = new Float64Array([0, 3, 4, 0, 0, 0, 0, 0]);
mvNorm(v); // 5
mvNormSquared(mv) / norm_squared(coefficients)
Squared Euclidean norm (avoids square root).
mvNormSquared(v); // 25
mvNormalize(mv) / normalize(coefficients)
Normalize to unit length.
const unit = mvNormalize(v); // [0, 0.6, 0.8, 0, 0, 0, 0, 0]
mvNorm(unit); // 1.0
Algebraic Transformations
mvReverse(dimension, mv) / reverse(dimension, coefficients)
Reversion: grade-dependent sign reversal. Grade k gets factor (-1)^(k(k-1)/2).
const reversed = mvReverse(3, mv);
gradeInvolution(dimension, mv) / grade_involution(dimension, coefficients)
Grade involution: negate odd-grade components.
const involuted = gradeInvolution(3, mv);
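Both sign rules follow directly from the formulas above. A plain-JS sketch applying them per coefficient (illustrative only, not Orlando's implementation):

```javascript
// Grade of a blade index via popcount
const bladeGrade = (i) => i.toString(2).split('1').length - 1;

// Reversion: grade k gets factor (-1)^(k(k-1)/2)
const reversionSign = (k) => ((k * (k - 1)) / 2) % 2 === 0 ? 1 : -1;

// Grade involution: negate odd grades, i.e. factor (-1)^k
const involutionSign = (k) => (k % 2 === 0 ? 1 : -1);

const mvReverse = (dim, mv) =>
  mv.map((c, i) => c * reversionSign(bladeGrade(i)));

const rev = mvReverse(3, [1, 1, 1, 1, 1, 1, 1, 1]);
// rev: [1, 1, 1, -1, 1, -1, -1, -1]
// grades 0 and 1 keep their sign; grades 2 and 3 flip
```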
Rust Usage
use orlando_transducers::geometric_optics::*;

let mv = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];

// Grade operations
let vectors = grade_extract(3, 1, &mv);   // [2.0, 3.0, 5.0]
let projected = grade_project(3, 2, &mv); // bivector projection
let mask = grade_mask(3, &mv);            // bitmask of present grades

// Norms
let n = norm(&mv);
let unit = normalize(&mv);

// Transformations
let rev = reverse(3, &mv);
let inv = grade_involution(3, &mv);
Reactive Primitives API
Orlando provides Signal and Stream types for reactive programming. These are currently Rust-only APIs.
Signal<T>
A time-varying value with automatic change propagation. When a source signal changes, all derived signals update automatically.
Signal::new(value)
Create a signal with an initial value.
use orlando_transducers::signal::Signal;

let counter = Signal::new(0_i32);
.get()
Get the current value. Returns Ref<T> (smart pointer).
let val = counter.get();
assert_eq!(*val, 0);
.set(value)
Set a new value, notifying all subscribers.
counter.set(42);
assert_eq!(*counter.get(), 42);
.update(f)
Update the value by applying a function.
counter.update(|n| n + 1);
assert_eq!(*counter.get(), 43);
.subscribe(f)
Subscribe to value changes. Returns a Subscription that unsubscribes when dropped.
let _sub = counter.subscribe(|val| {
    println!("Counter is now: {}", val);
});

counter.set(10); // prints: Counter is now: 10
.map(f)
Create a derived signal that auto-updates when the source changes.
let celsius = Signal::new(100.0_f64);
let fahrenheit = celsius.map(|c| c * 9.0 / 5.0 + 32.0);

assert_eq!(*fahrenheit.get(), 212.0);

celsius.set(0.0);
assert_eq!(*fahrenheit.get(), 32.0); // auto-updated
.combine(other, f)
Combine two signals into a derived signal.
let width = Signal::new(800_u32);
let height = Signal::new(600_u32);
let area = width.combine(&height, |w, h| w * h);

assert_eq!(*area.get(), 480_000);

width.set(1920);
assert_eq!(*area.get(), 1_152_000); // auto-updated
.fold(stream, init, f)
Fold a stream's events into this signal's value.
use orlando_transducers::stream::Stream;

let counter = Signal::new(0_i32);
let clicks = Stream::new();

counter.fold(&clicks, 0, |count, _: &()| count + 1);

clicks.emit(());
clicks.emit(());
assert_eq!(*counter.get(), 2);
Stream<T>
A push-based event stream for discrete events.
Stream::new()
Create an empty stream.
use orlando_transducers::stream::Stream;

let events = Stream::<String>::new();
.emit(value)
Push a value to all subscribers.
events.emit("hello".into());
.subscribe(f)
Listen for events. Returns StreamSubscription that unsubscribes when dropped.
let _sub = events.subscribe(|msg| {
    println!("Received: {}", msg);
});

events.emit("test".into()); // prints: Received: test
.map(f)
Transform each event.
let raw = Stream::new();
let upper = raw.map(|s: String| s.to_uppercase());

upper.subscribe(|s| println!("{}", s));
raw.emit("hello".into()); // prints: HELLO
.filter(pred)
Only pass events matching the predicate.
let numbers = Stream::new();
let evens = numbers.filter(|n: &i32| n % 2 == 0);
let _sub = evens.subscribe(|n| println!("Even: {}", n));

numbers.emit(1); // nothing
numbers.emit(2); // prints: Even: 2
numbers.emit(3); // nothing
numbers.emit(4); // prints: Even: 4
.take(n)
Take only the first n events, then stop.
let events = Stream::new();
let first3 = events.take(3);
let _sub = first3.subscribe(|v| println!("{}", v));

events.emit(1); // prints: 1
events.emit(2); // prints: 2
events.emit(3); // prints: 3
events.emit(4); // nothing (3 already taken)
.merge(other)
Merge two streams into one.
let keyboard = Stream::new();
let mouse = Stream::new();
let input = keyboard.merge(&mouse);
let _sub = input.subscribe(|event| handle(event));

keyboard.emit(KeyEvent::Press('a'));
mouse.emit(MouseEvent::Click(100, 200));
// Both arrive at the merged subscriber
.fold(init, f)
Fold events into a Signal, bridging discrete events to continuous state.
let measurements = Stream::new();
let sum = measurements.fold(0.0_f64, |acc, val: &f64| acc + val);

measurements.emit(10.0);
measurements.emit(20.0);
assert_eq!(*sum.get(), 30.0);
Subscription Lifecycle
Subscriptions are cleaned up automatically when dropped:
let sig = Signal::new(0);
{
    let _sub = sig.subscribe(|v| println!("{}", v));
    sig.set(1); // prints: 1
} // _sub dropped; subscription removed
sig.set(2); // no output
Explicit cleanup:
let sub = stream.subscribe(|e| handle(e));
drop(sub); // explicitly unsubscribe
Rust API
Orlando is a first-class Rust crate with ergonomic iterator extensions, reactive primitives, and a fluent builder API.
Core Transducers
The fundamental building blocks for data transformation pipelines.
The Transducer Trait
use orlando_transducers::{Transducer, Map, Filter, Take};

// Transducers compose transformations, not data
let pipeline = Map::new(|x: i32| x * 2)
    .compose(Filter::new(|x: &i32| *x > 10))
    .compose(Take::new(5));

// Execute with a collector
let result = orlando_transducers::to_vec(&pipeline, 1..100);
// result: [12, 14, 16, 18, 20]
Available Transducers
| Type | Description | Constructor |
|---|---|---|
| `Map<F>` | Transform each element | `Map::new(\|x\| x * 2)` |
| `Filter<P>` | Keep elements matching predicate | `Filter::new(\|x: &i32\| *x > 5)` |
| `Take` | Take first N elements (early termination) | `Take::new(10)` |
| `TakeWhile<P>` | Take while predicate holds | `TakeWhile::new(\|x: &i32\| *x < 100)` |
| `Drop` | Skip first N elements | `Drop::new(5)` |
| `DropWhile<P>` | Skip while predicate holds | `DropWhile::new(\|x: &i32\| *x < 10)` |
| `FlatMap<F>` | Transform and flatten | `FlatMap::new(\|x\| vec![x, x*2])` |
| `Reject<P>` | Remove matching elements | `Reject::new(\|x: &i32\| *x < 0)` |
| `Chunk` | Group into fixed-size chunks | `Chunk::new(3)` |
| `Unique` | Remove consecutive duplicates | `Unique::new()` |
| `Scan<F, S>` | Accumulate with intermediate results | `Scan::new(0, \|acc, x\| acc + x)` |
Collectors
Terminal operations that execute a pipeline:
use orlando_transducers::*;

let pipeline = Map::new(|x: i32| x * 2);

let vec_result = to_vec(&pipeline, 1..=5);          // [2, 4, 6, 8, 10]
let total = sum(&pipeline, 1..=5);                  // 30
let n = count(&pipeline, 1..=5);                    // 5
let head = first(&pipeline, 1..=5);                 // Some(2)
let tail = last(&pipeline, 1..=5);                  // Some(10)
let all_pos = every(&pipeline, 1..=5, |x| *x > 0);  // true
let has_ten = some(&pipeline, 1..=5, |x| *x == 10); // true
Logic Combinators
use orlando_transducers::logic::{When, Unless, IfElse};

// When: transform only when the predicate is true
let double_positive = When::new(|x: &i32| *x > 0, |x: i32| x * 2);

// Unless: transform only when the predicate is false
let zero_negative = Unless::new(|x: &i32| *x > 0, |_: i32| 0);

// IfElse: branch on a condition
let classify = IfElse::new(
    |x: &i32| *x >= 0,
    |x: i32| x * 2,   // non-negative: double
    |x: i32| x.abs(), // negative: absolute value
);
TransduceExt Trait
Extension trait that adds .transduce() to any iterator:
use orlando_transducers::iter_ext::TransduceExt;
use orlando_transducers::{Map, Filter, Take};

let result: Vec<i32> = (1..100)
    .transduce(
        Map::new(|x: i32| x * 2)
            .compose(Filter::new(|x: &i32| *x > 10))
            .compose(Take::new(5)),
    );
assert_eq!(result, vec![12, 14, 16, 18, 20]);
The TransducedIterator returned by .transduce() is a lazy iterator adapter - it processes elements on demand and supports early termination.
PipelineBuilder
Fluent builder API for constructing transducer pipelines without manual composition:
use orlando_transducers::iter_ext::PipelineBuilder;

let result = PipelineBuilder::new()
    .map(|x: i32| x * 2)
    .filter(|x: &i32| *x > 10)
    .take(5)
    .run(1..100);
assert_eq!(result, vec![12, 14, 16, 18, 20]);
Available Builder Methods
| Method | Description |
|---|---|
| `.map(f)` | Transform each element |
| `.filter(pred)` | Keep matching elements |
| `.take(n)` | Take first N elements |
| `.run(iter)` | Execute the pipeline on an iterator, collecting into a `Vec` |
Signal<T>
A time-varying value with automatic change propagation. Signals form the foundation of reactive programming in Orlando.
use orlando_transducers::signal::Signal;

// Create a signal with an initial value
let celsius = Signal::new(0.0_f64);

// Derived signal that auto-updates when the source changes
let fahrenheit = celsius.map(|c| c * 9.0 / 5.0 + 32.0);
assert_eq!(*fahrenheit.get(), 32.0);

celsius.set(100.0);
assert_eq!(*fahrenheit.get(), 212.0); // automatically updated
Signal Methods
| Method | Description |
|---|---|
| `Signal::new(value)` | Create a signal with an initial value |
| `.get()` | Get the current value (returns `Ref<T>`) |
| `.set(value)` | Set a new value, notifying all subscribers |
| `.update(f)` | Update the value via a function |
| `.subscribe(f)` | Subscribe to changes; returns a `Subscription` |
| `.map(f)` | Create a derived signal |
| `.combine(other, f)` | Combine two signals into one |
| `.fold(stream, init, f)` | Fold a stream into this signal |
Subscriptions
let counter = Signal::new(0);

let _sub = counter.subscribe(|val| {
    println!("Counter changed to: {}", val);
});

counter.set(1); // prints: Counter changed to: 1
counter.set(2); // prints: Counter changed to: 2
// Subscription is dropped when _sub goes out of scope
Combining Signals
let width = Signal::new(10.0_f64);
let height = Signal::new(5.0_f64);
let area = width.combine(&height, |w, h| w * h);
assert_eq!(*area.get(), 50.0);

width.set(20.0);
assert_eq!(*area.get(), 100.0); // auto-updated
Stream<T>
A push-based event stream for discrete event processing.
use orlando_transducers::stream::Stream;

let clicks = Stream::new();

// Transform events
let doubled = clicks.map(|x: i32| x * 2);

// Subscribe to processed events
let _sub = doubled.subscribe(|val| println!("Got: {}", val));

clicks.emit(21); // prints: Got: 42
Stream Methods
| Method | Description |
|---|---|
| `Stream::new()` | Create an empty stream |
| `.emit(value)` | Push a value to all subscribers |
| `.subscribe(f)` | Listen for events; returns a `StreamSubscription` |
| `.map(f)` | Transform each event |
| `.filter(pred)` | Only pass matching events |
| `.take(n)` | Take the first N events, then stop |
| `.merge(other)` | Merge two streams |
| `.fold(init, f)` | Fold into a `Signal` |
Stream-Signal Bridge
The .fold() method bridges discrete events into continuous signal values:
use orlando_transducers::signal::Signal;
use orlando_transducers::stream::Stream;

let counter = Signal::new(0);
let increments = Stream::new();

// Each stream event updates the signal
counter.fold(&increments, 0, |acc, _| acc + 1);

increments.emit(()); // counter is now 1
increments.emit(()); // counter is now 2
Multi-Input Operations
Standalone functions for combining multiple collections:
use orlando_transducers::{merge, intersection, difference, union, symmetric_difference};

let a = vec![1, 2, 3, 4];
let b = vec![3, 4, 5, 6];

let merged = merge(vec![a.clone(), b.clone()]);   // [1, 3, 2, 4, 3, 5, 4, 6]
let common = intersection(a.clone(), b.clone());  // [3, 4]
let unique_a = difference(a.clone(), b.clone());  // [1, 2]
let all = union(a.clone(), b.clone());            // [1, 2, 3, 4, 5, 6]
let exclusive = symmetric_difference(a, b);       // [1, 2, 5, 6]
Statistical Functions
use orlando_transducers::collectors::*;

let data = vec![2.0, 4.0, 6.0, 8.0];

let avg = mean(&data);           // 5.0
let mid = median(&data);         // 5.0
let var = variance(&data);       // 6.666...
let dev = std_dev(&data);        // 2.581...
let p95 = quantile(&data, 0.95);
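For reference, the figures above correspond to the sample (n - 1) variance. The same numbers fall out of a few lines of plain JavaScript (an illustration of the definition, not Orlando's Rust implementation):

```javascript
// Sample (n - 1) variance and standard deviation for [2, 4, 6, 8].
const data = [2.0, 4.0, 6.0, 8.0];

const mean = data.reduce((acc, x) => acc + x, 0) / data.length;
const variance =
  data.reduce((acc, x) => acc + (x - mean) ** 2, 0) / (data.length - 1);
const stdDev = Math.sqrt(variance);

// mean === 5, variance ≈ 6.667, stdDev ≈ 2.582
```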
Migration Guide: From Array Methods to Orlando Transducers
A practical guide for converting JavaScript array operations to Orlando transducers.
Table of Contents
- Why Migrate?
- Basic Conversions
- Common Patterns
- Performance Gotchas
- Advanced Patterns
- Troubleshooting
Why Migrate?
Array Methods Create Intermediate Arrays
// ❌ Traditional approach - creates 2 intermediate arrays
const result = data
.map(x => x * 2) // Intermediate array 1
.filter(x => x > 10) // Intermediate array 2
.slice(0, 5); // Final result
Problems:
- Memory allocation for each step
- Full iteration even if you only need first N results
- Garbage collection overhead
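The full-iteration cost is easy to observe with no library at all: even when only the first five results are kept, each chained array method visits every element.

```javascript
// Count how much work chained array methods do when only 5 results are needed.
const data = Array.from({ length: 1000 }, (_, i) => i);

let mapCalls = 0;
let filterCalls = 0;

const result = data
  .map(x => { mapCalls++; return x * 2; })        // walks all 1000 items
  .filter(x => { filterCalls++; return x > 10; }) // walks all 1000 again
  .slice(0, 5);                                   // then keeps just 5

// mapCalls === 1000, filterCalls === 1000, result is [12, 14, 16, 18, 20]
```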
Orlando Processes in a Single Pass
// ✅ Orlando approach - single pass, no intermediates
import init, { Pipeline } from 'orlando-transducers';
await init();
const pipeline = new Pipeline()
.map(x => x * 2)
.filter(x => x > 10)
.take(5);
const result = pipeline.toArray(data);
Benefits:
- No intermediate allocations
- Early termination (stops after collecting 5 elements)
- Single pass over data
- WASM-powered performance
Basic Conversions
Map
Before (Array):
const doubled = numbers.map(x => x * 2);
After (Orlando):
const pipeline = new Pipeline()
.map(x => x * 2);
const doubled = pipeline.toArray(numbers);
Filter
Before (Array):
const evens = numbers.filter(x => x % 2 === 0);
After (Orlando):
const pipeline = new Pipeline()
.filter(x => x % 2 === 0);
const evens = pipeline.toArray(numbers);
Map + Filter
Before (Array):
const result = numbers
.map(x => x * 2)
.filter(x => x > 10);
After (Orlando):
const pipeline = new Pipeline()
.map(x => x * 2)
.filter(x => x > 10);
const result = pipeline.toArray(numbers);
Take (slice)
Before (Array):
const first5 = numbers.slice(0, 5);
After (Orlando):
const pipeline = new Pipeline()
.take(5);
const first5 = pipeline.toArray(numbers);
💡 Performance Win: Orlando stops pulling input once 5 elements are collected, which matters when take follows other operations. Chained array methods transform the whole input first, then slice.
Drop (slice)
Before (Array):
const skip3 = numbers.slice(3);
After (Orlando):
const pipeline = new Pipeline()
.drop(3);
const skip3 = pipeline.toArray(numbers);
Find First
Before (Array):
const first = numbers.find(x => x > 100);
After (Orlando):
const pipeline = new Pipeline()
.filter(x => x > 100)
.take(1);
const result = pipeline.toArray(numbers);
const first = result[0]; // or undefined
💡 Performance Win: Orlando stops immediately after finding the first match.
Reduce (Sum)
Before (Array):
const sum = numbers.reduce((acc, x) => acc + x, 0);
After (Orlando):
const pipeline = new Pipeline()
.map(x => x); // identity; replace with real transformations as needed
const sum = pipeline.reduce(
numbers,
(acc, x) => acc + x,
0
);
Common Patterns
Pagination
Before (Array):
function paginate(data, page, pageSize) {
const start = (page - 1) * pageSize;
return data.slice(start, start + pageSize);
}
const page2 = paginate(users, 2, 20);
After (Orlando):
function paginate(data, page, pageSize) {
return new Pipeline()
.drop((page - 1) * pageSize)
.take(pageSize)
.toArray(data);
}
const page2 = paginate(users, 2, 20);
💡 Performance Win: Orlando only processes the exact slice needed, not the entire array.
Data Transformation Pipeline
Before (Array):
const activeCompanyEmails = users
.filter(user => user.active)
.map(user => ({
id: user.id,
email: user.email.toLowerCase()
}))
.filter(user => user.email.endsWith('@company.com'))
.map(user => user.email)
.slice(0, 100);
After (Orlando):
const pipeline = new Pipeline()
.filter(user => user.active)
.map(user => ({
id: user.id,
email: user.email.toLowerCase()
}))
.filter(user => user.email.endsWith('@company.com'))
.map(user => user.email)
.take(100);
const activeCompanyEmails = pipeline.toArray(users);
💡 Performance Win:
- Single pass (no intermediate arrays)
- Early termination (stops at 100 emails)
- WASM-powered execution
Search with Multiple Filters
Before (Array):
const searchProducts = (products, filters) => {
return products
.filter(p => p.category === filters.category)
.filter(p => p.price >= filters.minPrice)
.filter(p => p.price <= filters.maxPrice)
.filter(p => p.rating >= filters.minRating)
.filter(p => p.inStock)
.slice(0, filters.limit || 20);
};
After (Orlando):
const searchProducts = (products, filters) => {
const pipeline = new Pipeline()
.filter(p => p.category === filters.category)
.filter(p => p.price >= filters.minPrice)
.filter(p => p.price <= filters.maxPrice)
.filter(p => p.rating >= filters.minRating)
.filter(p => p.inStock)
.take(filters.limit || 20);
return pipeline.toArray(products);
};
Analytics Aggregation
Before (Array):
// Calculate total revenue from purchases
const purchases = events
.filter(e => e.type === 'purchase')
.map(e => e.amount);
const totalRevenue = purchases.reduce((sum, amt) => sum + amt, 0);
After (Orlando):
const pipeline = new Pipeline()
.filter(e => e.type === 'purchase')
.map(e => e.amount);
const totalRevenue = pipeline.reduce(
events,
(sum, amt) => sum + amt,
0
);
Top N with Sorting
Before (Array):
const top10 = products
.filter(p => p.inStock)
.sort((a, b) => b.sales - a.sales)
.slice(0, 10);
After (Orlando):
// Note: Orlando doesn't have built-in sort (sorting requires seeing all data)
// For this pattern, sort BEFORE the pipeline or use a hybrid approach
const sorted = products
.filter(p => p.inStock)
.sort((a, b) => b.sales - a.sales);
const top10 = new Pipeline()
.take(10)
.toArray(sorted);
// Alternative: filter with Orlando first, then sort and slice the smaller result
const top10InStock = new Pipeline()
.filter(p => p.inStock)
.toArray(products)
.sort((a, b) => b.sales - a.sales)
.slice(0, 10);
⚠️ Note: Transducers are best for operations that don't require seeing all data at once. For sorting, use array methods or sort before/after the pipeline.
Performance Gotchas
1. Small Datasets (<100 elements)
Array methods may be faster!
// For small data, array methods have less overhead
const small = [1, 2, 3, 4, 5];
// This is fine (overhead is negligible)
const result = small.map(x => x * 2).filter(x => x > 5);
// Orlando overhead may not be worth it for tiny datasets
Rule of thumb: Use Orlando for datasets >1000 elements or complex pipelines.
2. Single Operation
Array methods are simpler for single operations:
// ❌ Overkill for single operation
const doubled = new Pipeline()
.map(x => x * 2)
.toArray(numbers);
// ✅ Just use array method
const doubled = numbers.map(x => x * 2);
Use Orlando when: You have 2+ operations, especially with early termination.
3. Need All Data Anyway
If processing everything, Orlando advantage is smaller:
// If you need all 1M results anyway, Orlando is still faster but less dramatic
const allDoubled = new Pipeline()
.map(x => x * 2)
.toArray(oneMillion);
// vs
const allDoubled = oneMillion.map(x => x * 2);
// Orlando still wins (no intermediate arrays), but margin is smaller
Biggest wins: Early termination scenarios (take, takeWhile, find first).
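The early-termination effect can be sketched with plain generators, with no library involved (an illustration of the mechanism, not Orlando's implementation):

```javascript
// A lazy generator pipeline only pulls as many elements as the consumer asks for.
function* lazyMap(iter, f) { for (const x of iter) yield f(x); }
function* lazyFilter(iter, p) { for (const x of iter) if (p(x)) yield x; }

let visited = 0;
function* source(n) { for (let i = 1; i <= n; i++) { visited++; yield i; } }

const pipeline = lazyFilter(lazyMap(source(1_000_000), x => x * 2), x => x > 10);

// Take the first 5 matches, then stop pulling
const first5 = [];
for (const x of pipeline) {
  first5.push(x);
  if (first5.length === 5) break;
}

// first5 is [12, 14, 16, 18, 20]; only 10 of the 1,000,000 items were visited
```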
Advanced Patterns
Reusable Pipelines
Before (Array):
// Have to repeat the chain
const activeUsers1 = users1.filter(u => u.active).map(u => u.email);
const activeUsers2 = users2.filter(u => u.active).map(u => u.email);
After (Orlando):
// Define once, reuse many times
const activeEmailPipeline = new Pipeline()
.filter(u => u.active)
.map(u => u.email);
const activeUsers1 = activeEmailPipeline.toArray(users1);
const activeUsers2 = activeEmailPipeline.toArray(users2);
const activeUsers3 = activeEmailPipeline.toArray(users3);
Debugging with Tap
Before (Array):
const result = data
.map(x => {
console.log('Input:', x);
return x * 2;
})
.filter(x => {
console.log('After map:', x);
return x > 10;
});
After (Orlando):
const pipeline = new Pipeline()
.tap(x => console.log('Input:', x))
.map(x => x * 2)
.tap(x => console.log('After map:', x))
.filter(x => x > 10)
.tap(x => console.log('After filter:', x));
const result = pipeline.toArray(data);
Conditional Pipelines
Before (Array):
let result = data.map(x => x * 2);
if (needsFiltering) {
result = result.filter(x => x > 10);
}
if (limit) {
result = result.slice(0, limit);
}
After (Orlando):
let pipeline = new Pipeline()
.map(x => x * 2);
if (needsFiltering) {
pipeline = pipeline.filter(x => x > 10);
}
if (limit) {
pipeline = pipeline.take(limit);
}
const result = pipeline.toArray(data);
Troubleshooting
"Pipeline is not iterable"
Problem:
// ❌ Won't work
for (const item of pipeline) {
console.log(item);
}
Solution:
Pipelines are not iterable. Use .toArray() to execute:
// ✅ Correct
const result = pipeline.toArray(data);
for (const item of result) {
console.log(item);
}
"Cannot read property of undefined"
Problem:
const pipeline = new Pipeline();
const result = pipeline.toArray(); // ❌ Missing source data
Solution: Always provide source data to terminal operations:
const result = pipeline.toArray(data); // ✅ Provide data
Type Errors in TypeScript
Problem:
const pipeline = new Pipeline()
.map(x => x * 2) // x is 'any'
.filter(x => x.length > 0); // Runtime error if x is number
Solution: Add type annotations to your functions:
const pipeline = new Pipeline()
.map((x: number) => x * 2)
.filter((x: number) => x > 10);
Performance Not Improving
Check:
- Dataset size: Orlando shines on large datasets (>1000 elements)
- Early termination: Are you using take or takeWhile?
- Complexity: Single operations may not benefit much
- Initialization: Are you reusing pipelines or creating new ones each time?
Good scenario for Orlando:
// Large dataset + complex pipeline + early termination
const result = new Pipeline()
.map(/* expensive operation */)
.filter(/* complex condition */)
.map(/* another transformation */)
.take(10) // Early termination!
.toArray(millionItems);
Not ideal for Orlando:
// Small dataset + single operation
const result = new Pipeline()
.map(x => x * 2)
.toArray([1, 2, 3, 4, 5]);
Summary: When to Use Orlando
✅ Great for:
- Large datasets (>1000 elements)
- Complex pipelines (3+ operations)
- Early termination scenarios (take, takeWhile)
- Reusable transformation pipelines
- Performance-critical code
- Reducing memory allocations
⚠️ Consider array methods for:
- Small datasets (<100 elements)
- Single operations
- Prototyping / quick scripts
- When you need array methods not in Orlando (e.g., sort, reverse)
Immutable Nested Updates with Optics
Orlando's optics replace verbose manual spreading for immutable updates.
Simple Property Update
Before (Spread):
const updated = { ...user, name: "Bob" };
After (Orlando):
import { lens } from 'orlando-transducers';
const nameLens = lens('name');
const updated = nameLens.set(user, "Bob");
Deep Nested Update
Before (Spread):
const updated = {
...state,
user: {
...state.user,
address: {
...state.user.address,
city: "Boston"
}
}
};
After (Orlando):
import { lensPath } from 'orlando-transducers';
const cityLens = lensPath(['user', 'address', 'city']);
const updated = cityLens.set(state, "Boston");
Transform In Place
Before (Spread):
const updated = { ...user, age: user.age + 1 };
After (Orlando):
const ageLens = lens('age');
const updated = ageLens.over(user, age => age + 1);
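The get/set/over semantics above can be sketched in a few lines of plain JavaScript. This is an illustration of what a lens does (makeLens is a hypothetical helper), not Orlando's WASM implementation:

```javascript
// Minimal lens sketch: get reads a field, set and over return new objects,
// leaving the original untouched.
const makeLens = (key) => ({
  get: (obj) => obj[key],
  set: (obj, value) => ({ ...obj, [key]: value }),
  over: (obj, f) => ({ ...obj, [key]: f(obj[key]) }),
});

const ageLens = makeLens('age');
const user = { name: "Ada", age: 36 };

const older = ageLens.over(user, age => age + 1);
// older.age === 37, while user.age is still 36
```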
Nullable Fields with Optional
Before (Manual check):
const phone = user.phone != null ? user.phone : "N/A";
const updated = user.phone != null
? { ...user, phone: normalize(user.phone) }
: user;
After (Orlando):
import { optional } from 'orlando-transducers';
const phoneLens = optional('phone');
const phone = phoneLens.getOr(user, "N/A");
const updated = phoneLens.over(user, normalize); // no-op if undefined
Next Steps
- Read the JavaScript API Documentation
- Try the Interactive Demo
- Run Performance Benchmarks
- Explore Real-World Examples
Data Processing Pipelines
Real-world patterns for building data transformation pipelines with Orlando.
ETL: Extract, Transform, Load
Normalizing User Data
import init, { Pipeline } from 'orlando-transducers';
await init();
const normalizeUsers = new Pipeline()
.filter(u => u != null)
.filter(u => u.email != null && u.email.includes('@'))
.map(u => ({
id: u.id,
name: u.name.trim(),
email: u.email.toLowerCase().trim(),
role: u.role || 'user',
createdAt: new Date(u.created_at).toISOString(),
}))
.unique(); // deduplicate consecutive entries
// Reuse on multiple data sources
const fromCsv = normalizeUsers.toArray(csvRecords);
const fromApi = normalizeUsers.toArray(apiResponse.users);
Log Processing
// Parse and filter error logs
const errorPipeline = new Pipeline()
.map(line => {
const [timestamp, level, ...messageParts] = line.split(' ');
return { timestamp, level, message: messageParts.join(' ') };
})
.filter(entry => entry.level === 'ERROR' || entry.level === 'FATAL')
.map(entry => ({
...entry,
timestamp: new Date(entry.timestamp),
}));
const errors = errorPipeline.toArray(logLines);
Analytics Aggregation
Revenue Calculation
const revenuePipeline = new Pipeline()
.filter(event => event.type === 'purchase')
.filter(event => event.status === 'completed')
.map(event => event.amount);
const totalRevenue = revenuePipeline.reduce(
events,
(sum, amount) => sum + amount,
0
);
Top Products by Category
import { Pipeline, topK } from 'orlando-transducers';
// Extract and score products
const scoredProducts = new Pipeline()
.filter(p => p.inStock && p.rating >= 3.0)
.map(p => ({
...p,
score: p.rating * Math.log(p.salesCount + 1),
}))
.toArray(products);
// Get top 10 by computed score
const top10 = topK(scoredProducts, 10);
Funnel Analysis
// Count users at each stage of a conversion funnel
const stages = ['visit', 'signup', 'activate', 'purchase'];
const funnelCounts = stages.map(stage => {
const count = new Pipeline()
.filter(event => event.stage === stage)
.unique() // collapse consecutive duplicate events
.toArray(events)
.length;
return { stage, count };
});
Pagination
function paginate(data, page, pageSize) {
return new Pipeline()
.drop((page - 1) * pageSize)
.take(pageSize)
.toArray(data);
}
const page2 = paginate(users, 2, 20); // items 21-40
Filtered Pagination
function searchAndPaginate(data, query, page, pageSize) {
const pipeline = new Pipeline()
.filter(item => item.name.toLowerCase().includes(query.toLowerCase()))
.filter(item => item.active)
.drop((page - 1) * pageSize)
.take(pageSize);
return pipeline.toArray(data);
}
Search with Multiple Filters
import { Pipeline } from 'orlando-transducers';
function searchProducts(catalog, filters) {
let pipeline = new Pipeline();
if (filters.category) {
pipeline = pipeline.filter(p => p.category === filters.category);
}
if (filters.minPrice != null) {
pipeline = pipeline.filter(p => p.price >= filters.minPrice);
}
if (filters.maxPrice != null) {
pipeline = pipeline.filter(p => p.price <= filters.maxPrice);
}
if (filters.minRating) {
pipeline = pipeline.filter(p => p.rating >= filters.minRating);
}
if (filters.inStockOnly) {
pipeline = pipeline.filter(p => p.inStock);
}
return pipeline.take(filters.limit || 20).toArray(catalog);
}
const results = searchProducts(catalog, {
category: 'electronics',
minPrice: 50,
maxPrice: 500,
minRating: 4.0,
inStockOnly: true,
limit: 20,
});
Combining Multiple Data Sources
Using Multi-Input Operations
import { Pipeline, intersection, difference, union, merge } from 'orlando-transducers';
// Find users active on both platforms
const mobileUsers = new Pipeline()
.filter(e => e.platform === 'mobile')
.map(e => e.userId)
.toArray(events);
const webUsers = new Pipeline()
.filter(e => e.platform === 'web')
.map(e => e.userId)
.toArray(events);
const crossPlatform = intersection(mobileUsers, webUsers);
const mobileOnly = difference(mobileUsers, webUsers);
const allUsers = union(mobileUsers, webUsers);
Interleaving Data Streams
import { merge, Pipeline } from 'orlando-transducers';
// Process logs from multiple servers
const processLogs = new Pipeline()
.filter(log => log.level === 'error')
.map(log => ({
server: log.source,
message: log.message,
time: new Date(log.timestamp),
}));
const server1Errors = processLogs.toArray(server1Logs);
const server2Errors = processLogs.toArray(server2Logs);
// Interleave for chronological review
const allErrors = merge([server1Errors, server2Errors]);
Debugging Pipelines
Use .tap() to inspect values flowing through the pipeline without modifying them:
const pipeline = new Pipeline()
.tap(x => console.log('[input]', x))
.filter(x => x.active)
.tap(x => console.log('[after filter]', x))
.map(x => x.email.toLowerCase())
.tap(x => console.log('[after map]', x))
.take(5);
const result = pipeline.toArray(users);
Conditional Debugging
const DEBUG = process.env.NODE_ENV === 'development';
function debug(label) {
return DEBUG
? x => console.log(`[${label}]`, x)
: () => {};
}
const pipeline = new Pipeline()
.tap(debug('raw'))
.filter(isValid)
.tap(debug('valid'))
.map(transform)
.tap(debug('transformed'));
Rust: PipelineBuilder for ETL
use orlando_transducers::iter_ext::PipelineBuilder;

// Extract numeric values, filter outliers, take top results
let cleaned: Vec<f64> = PipelineBuilder::new()
    .map(|record: Record| record.value)
    .filter(|v: &f64| *v > 0.0 && *v < 1000.0)
    .take(100)
    .run(raw_records.into_iter());
Rust: Hybrid Composition
use orlando_transducers::{Map, Filter, to_vec, intersection};

// Process each dataset independently
let pipeline = Map::new(|r: Record| r.user_id)
    .compose(Filter::new(|id: &u64| *id > 0));

let dataset_a_ids = to_vec(&pipeline, dataset_a);
let dataset_b_ids = to_vec(&pipeline, dataset_b);

// Find common users
let common_users = intersection(dataset_a_ids, dataset_b_ids);
Optics Composition Patterns
Patterns for combining Orlando's optics with transducer pipelines for expressive, immutable data transformations.
Streaming Lenses: Optics + Transducers
Orlando combines lenses directly with transducer pipelines, a pairing few lens libraries offer.
Extract, Filter, Transform
import init, { Pipeline, lens, lensPath } from 'orlando-transducers';
await init();
const users = [
{ name: "Alice", profile: { email: "alice@company.com", verified: true }},
{ name: "Bob", profile: { email: "bob@gmail.com", verified: false }},
{ name: "Carol", profile: { email: "carol@company.com", verified: true }},
];
const emailLens = lensPath(['profile', 'email']);
const verifiedLens = lensPath(['profile', 'verified']);
// Pipeline with lens-based extraction
const companyEmails = new Pipeline()
.filterLens(verifiedLens, v => v === true) // filter by lens value
.viewLens(emailLens) // extract via lens
.filter(email => email.endsWith('@company.com'))
.toArray(users);
// Result: ["alice@company.com", "carol@company.com"]
Batch Updates via Pipeline
const products = [
{ id: 1, name: "Widget", price: 10, category: "tools" },
{ id: 2, name: "Gadget", price: 20, category: "tools" },
{ id: 3, name: "Doohickey", price: 15, category: "accessories" },
];
const priceLens = lens('price');
// Apply 20% discount to all items via pipeline
const discounted = new Pipeline()
.overLens(priceLens, price => price * 0.8)
.toArray(products);
// Each product has price * 0.8, originals unchanged
Selective Updates
const categoryLens = lens('category');
const priceLens = lens('price');
// Discount only tools
const discountTools = new Pipeline()
.map(product => {
if (categoryLens.get(product) === 'tools') {
return priceLens.over(product, price => price * 0.8);
}
return product;
})
.toArray(products);
Redux-Style State Management
Lens-Based Reducers
import { lens, lensPath } from 'orlando-transducers';
const state = {
user: {
profile: { name: "Alice", email: "alice@example.com" },
preferences: { theme: "dark", notifications: true },
},
cart: { items: [], total: 0 },
};
// Define lenses for each slice of state
const nameLens = lensPath(['user', 'profile', 'name']);
const themeLens = lensPath(['user', 'preferences', 'theme']);
const cartLens = lens('cart');
const totalLens = lensPath(['cart', 'total']);
// Reducers become simple lens operations
function reducer(state, action) {
switch (action.type) {
case 'SET_NAME':
return nameLens.set(state, action.payload);
case 'TOGGLE_THEME':
return themeLens.over(state, theme =>
theme === 'dark' ? 'light' : 'dark'
);
case 'SET_TOTAL':
return totalLens.set(state, action.payload);
default:
return state;
}
}
Multiple Updates
// Chain lens operations for multiple immutable updates
const newState = themeLens.set(
nameLens.set(state, "Alicia"),
"light"
);
// Original state unchanged
console.log(state.user.profile.name); // "Alice"
console.log(newState.user.profile.name); // "Alicia"
console.log(newState.user.preferences.theme); // "light"
Deep Nested Access with lensPath
const config = {
database: {
primary: {
host: "db.example.com",
port: 5432,
credentials: {
username: "admin",
password: "secret",
},
},
},
};
const dbHostLens = lensPath(['database', 'primary', 'host']);
const dbPortLens = lensPath(['database', 'primary', 'port']);
const dbUserLens = lensPath(['database', 'primary', 'credentials', 'username']);
dbHostLens.get(config); // "db.example.com"
dbPortLens.set(config, 5433); // new config with updated port
dbUserLens.over(config, u => u.toUpperCase()); // new config with "ADMIN"
Optional Fields
import { optional } from 'orlando-transducers';
const phoneLens = optional('phone');
const bioLens = optional('bio');
const users = [
{ name: "Alice", phone: "555-0100" },
{ name: "Bob" }, // no phone
{ name: "Carol", phone: "555-0102", bio: "Developer" },
];
// Safe extraction with defaults
const pipeline = new Pipeline()
.map(user => ({
name: user.name,
phone: phoneLens.getOr(user, "N/A"),
bio: bioLens.getOr(user, "No bio provided"),
}))
.toArray(users);
// Result:
// [
// { name: "Alice", phone: "555-0100", bio: "No bio provided" },
// { name: "Bob", phone: "N/A", bio: "No bio provided" },
// { name: "Carol", phone: "555-0102", bio: "Developer" },
// ]
Prism Patterns: Sum Types
import { prism } from 'orlando-transducers';
// API response: { status: "success", data: ... } or { status: "error", message: ... }
const successPrism = prism(
resp => resp.status === 'success' ? resp.data : undefined,
data => ({ status: 'success', data })
);
const errorPrism = prism(
resp => resp.status === 'error' ? resp.message : undefined,
message => ({ status: 'error', message })
);
const responses = [
{ status: 'success', data: { id: 1 } },
{ status: 'error', message: 'Not found' },
{ status: 'success', data: { id: 2 } },
];
// Extract only successful data
const pipeline = new Pipeline()
.map(resp => successPrism.preview(resp))
.filter(data => data !== undefined)
.toArray(responses);
// Result: [{ id: 1 }, { id: 2 }]
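The preview/review semantics at work here can be sketched in plain JavaScript. This is an illustration of the prism idea (makePrism is a hypothetical helper), not Orlando's implementation:

```javascript
// Minimal prism sketch: preview attempts an extraction (undefined on a miss),
// review rebuilds the whole value from the focused part.
const makePrism = (preview, review) => ({ preview, review });

const successPrism = makePrism(
  resp => (resp.status === 'success' ? resp.data : undefined),
  data => ({ status: 'success', data })
);

const ok = successPrism.preview({ status: 'success', data: { id: 7 } });
const miss = successPrism.preview({ status: 'error', message: 'nope' });
const built = successPrism.review({ id: 8 });
// ok is { id: 7 }, miss is undefined, built is { status: 'success', data: { id: 8 } }
```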
Rust: Optics Composition
use orlando_transducers::optics::{Lens, Traversal};

// Compose lenses for deep access
let department_lens = Lens::new(
    |company: &Company| company.department.clone(),
    |company: &Company, dept: Department| Company {
        department: dept,
        ..company.clone()
    },
);

let employees_traversal = Traversal::new(
    |dept: &Department| dept.employees.clone(),
    |dept: &Department, f: &dyn Fn(&Employee) -> Employee| Department {
        employees: dept.employees.iter().map(f).collect(),
        ..dept.clone()
    },
);

let name_lens = Lens::new(
    |emp: &Employee| emp.name.clone(),
    |emp: &Employee, name: String| Employee { name, ..emp.clone() },
);

// Company -> Department -> [Employee] -> name
// Use the traversal to update all employee names
let dept = employees_traversal.over_all(&department, |emp| {
    name_lens.over(emp, |n| n.to_uppercase())
});
Iso Patterns: Unit Conversions
use orlando_transducers::optics::Iso;

let meters_feet = Iso::new(
    |m: &f64| m * 3.28084,
    |f: &f64| f / 3.28084,
);

let celsius_fahrenheit = Iso::new(
    |c: &f64| c * 9.0 / 5.0 + 32.0,
    |f: &f64| (f - 32.0) * 5.0 / 9.0,
);

// Isos are reversible
let feet_meters = meters_feet.reverse();
assert_eq!(feet_meters.to(&3.28084), 1.0);

// Isos can be used as lenses
let as_lens = celsius_fahrenheit.as_lens();
let f = as_lens.get(&100.0); // 212.0
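The same idea can be sketched in plain JavaScript: an iso is a pair of mutually inverse functions, so reversing it just swaps the two directions. This is a concept sketch, not the library API; `makeIso` is an illustrative name:

```javascript
// Concept sketch: an iso is a pair of mutually inverse functions.
function makeIso(to, from) {
  return {
    to,
    from,
    // Reversing an iso swaps the two directions
    reverse: () => makeIso(from, to),
  };
}

const celsiusFahrenheit = makeIso(
  c => c * 9 / 5 + 32,
  f => (f - 32) * 5 / 9
);

const boiling = celsiusFahrenheit.to(100);  // 212
const back = celsiusFahrenheit.from(212);   // 100

const fahrenheitCelsius = celsiusFahrenheit.reverse();
const freezing = fahrenheitCelsius.to(32);  // 0
```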
Reactive State Management
Orlando's Signal and Stream types (Rust API) provide reactive primitives for state management with automatic change propagation.
Signals: Derived State
Signals represent time-varying values. When a source signal changes, all derived signals update automatically.
Temperature Converter
use orlando_transducers::signal::Signal;

let celsius = Signal::new(0.0_f64);
let fahrenheit = celsius.map(|c| c * 9.0 / 5.0 + 32.0);
let kelvin = celsius.map(|c| c + 273.15);

assert_eq!(*fahrenheit.get(), 32.0);
assert_eq!(*kelvin.get(), 273.15);

celsius.set(100.0);
assert_eq!(*fahrenheit.get(), 212.0); // auto-updated
assert_eq!(*kelvin.get(), 373.15);    // auto-updated
Shopping Cart
use orlando_transducers::signal::Signal;

let items = Signal::new(vec![
    ("Widget", 9.99),
    ("Gadget", 24.99),
]);

let subtotal = items.map(|items| {
    items.iter().map(|(_, price)| price).sum::<f64>()
});

let tax_rate = Signal::new(0.08);
let total = subtotal.combine(&tax_rate, |sub, rate| sub * (1.0 + rate));

assert_eq!(*subtotal.get(), 34.98);
// total = 34.98 * 1.08 = 37.7784

// Add an item
items.update(|mut items| {
    items.push(("Doohickey", 14.99));
    items
});
// subtotal and total auto-update
Combining Multiple Signals
let width = Signal::new(800_u32);
let height = Signal::new(600_u32);

let aspect_ratio = width.combine(&height, |w, h| *w as f64 / *h as f64);
let resolution = width.combine(&height, |w, h| format!("{}x{}", w, h));

assert_eq!(*resolution.get(), "800x600");

width.set(1920);
height.set(1080);
assert_eq!(*resolution.get(), "1920x1080");
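Since Signal is a Rust-only API, the propagation model can be illustrated in plain JavaScript. This is a minimal concept sketch, not part of Orlando; `createSignal` is a hypothetical helper. Each signal keeps a list of dependents and recomputes them whenever its value is set:

```javascript
// Concept sketch: a signal holds a value and recomputes dependents on change.
function createSignal(value) {
  const dependents = [];
  return {
    get: () => value,
    set(next) {
      value = next;
      // Push the change to every derived signal
      dependents.forEach(update => update());
    },
    map(fn) {
      const derived = createSignal(fn(value));
      dependents.push(() => derived.set(fn(value)));
      return derived;
    },
  };
}

const celsius = createSignal(0);
const fahrenheit = celsius.map(c => c * 9 / 5 + 32);

// fahrenheit.get() === 32
celsius.set(100);
// fahrenheit.get() === 212  (auto-updated)
```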
Streams: Event Processing
Streams handle discrete events with transformation pipelines.
Click Counter
use orlando_transducers::signal::Signal;
use orlando_transducers::stream::Stream;

let clicks = Stream::new();
let counter = Signal::new(0_i32);

// Bridge stream events into signal state
counter.fold(&clicks, 0, |count, _: &()| count + 1);

clicks.emit(());
clicks.emit(());
clicks.emit(());
assert_eq!(*counter.get(), 3);
Event Filtering
use orlando_transducers::stream::Stream;

let events = Stream::new();

// Only process error events
let errors = events.filter(|e: &Event| e.level == Level::Error);
errors.subscribe(|e| {
    eprintln!("ERROR: {}", e.message);
});

// Only process the first 100 events
let limited = events.take(100);
limited.subscribe(|e| {
    log_event(e);
});
Stream Merging
use orlando_transducers::stream::Stream;

let keyboard = Stream::new();
let mouse = Stream::new();

// Merge into a unified input stream
let input = keyboard.merge(&mouse);
input.subscribe(|event| {
    handle_input(event);
});

keyboard.emit(InputEvent::KeyPress('a'));
mouse.emit(InputEvent::Click(100, 200));
// Both arrive at the merged subscriber
Transform Pipeline on Stream
let raw_messages = Stream::new();

// Build a processing pipeline on the stream
let processed = raw_messages
    .map(|msg: String| msg.trim().to_lowercase())
    .filter(|msg: &String| !msg.is_empty());

processed.subscribe(|msg| {
    println!("Processed: {}", msg);
});

raw_messages.emit(" Hello World ".into());
// prints: Processed: hello world
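The event-fan-out model behind streams can also be sketched in plain JavaScript. This is a concept sketch, not the Orlando API; `createStream` is a hypothetical helper. Each operator builds a downstream stream and subscribes to its source:

```javascript
// Concept sketch: a stream fans emitted events out to subscribers.
function createStream() {
  const subscribers = [];
  return {
    subscribe(fn) { subscribers.push(fn); },
    emit(event) { subscribers.forEach(fn => fn(event)); },
    map(fn) {
      const out = createStream();
      this.subscribe(e => out.emit(fn(e)));
      return out;
    },
    filter(pred) {
      const out = createStream();
      this.subscribe(e => { if (pred(e)) out.emit(e); });
      return out;
    },
  };
}

const raw = createStream();
const processed = raw
  .map(msg => msg.trim().toLowerCase())
  .filter(msg => msg.length > 0);

const seen = [];
processed.subscribe(msg => seen.push(msg));

raw.emit(' Hello World ');
raw.emit('   ');
// seen: ['hello world']  (the blank message was filtered out)
```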
Stream-Signal Bridge: .fold()
The .fold() method is the key bridge between discrete events (Stream) and continuous state (Signal).
Running Average
use orlando_transducers::signal::Signal;
use orlando_transducers::stream::Stream;

let measurements = Stream::new();
let stats = Signal::new((0.0_f64, 0_u32)); // (sum, count)

stats.fold(&measurements, (0.0, 0), |state, value: &f64| {
    (state.0 + value, state.1 + 1)
});

let average = stats.map(|(sum, count)| {
    if count > 0 { sum / count as f64 } else { 0.0 }
});

measurements.emit(10.0);
measurements.emit(20.0);
measurements.emit(30.0);
assert_eq!(*average.get(), 20.0);
State Machine
use orlando_transducers::signal::Signal;
use orlando_transducers::stream::Stream;

#[derive(Clone, Debug, PartialEq)]
enum AppState {
    Loading,
    Ready,
    Error(String),
}

let actions = Stream::new();
let state = Signal::new(AppState::Loading);

state.fold(&actions, AppState::Loading, |current, action: &Action| {
    match (current, action) {
        (AppState::Loading, Action::DataLoaded) => AppState::Ready,
        (_, Action::Error(msg)) => AppState::Error(msg.clone()),
        (_, Action::Reset) => AppState::Loading,
        (state, _) => state,
    }
});

actions.emit(Action::DataLoaded);
assert_eq!(*state.get(), AppState::Ready);
Subscription Lifecycle
Subscriptions are automatically cleaned up when dropped:
let counter = Signal::new(0);

{
    let _sub = counter.subscribe(|val| {
        println!("Value: {}", val);
    });

    counter.set(1); // prints: Value: 1
    counter.set(2); // prints: Value: 2
} // _sub dropped here, subscription is cleaned up

counter.set(3); // no output - subscriber is gone
For streams:
let events = Stream::new();
let sub = events.subscribe(|e| handle(e));

// Explicitly unsubscribe when done
drop(sub);
Geometric Algebra
Orlando provides operations on multivector coefficient arrays for geometric algebra computations. These work on plain &[f64] (Rust) or Float64Array (JavaScript), making them lightweight and integration-friendly.
Concepts
In geometric algebra, a multivector is represented as an array of coefficients, one for each basis blade. For an algebra with n dimensions, there are 2^n basis blades organized by grade:
- Grade 0: Scalar (1 blade)
- Grade 1: Vectors (n blades)
- Grade 2: Bivectors (n choose 2 blades)
- Grade k: k-vectors (n choose k blades)
- Grade n: Pseudoscalar (1 blade)
JavaScript API
Grade Inspection
import init, {
bladeGrade,
bladesAtGradeCount,
gradeIndices,
gradeMask,
hasGrade,
isPureGrade,
} from 'orlando-transducers';
await init();
// What grade is blade index 3? (index 3 = e12, which is grade 2)
bladeGrade(3); // 2
// How many bivectors in 3D? (3 choose 2 = 3)
bladesAtGradeCount(3, 2); // 3
// Which indices hold grade-1 (vector) components in 3D?
gradeIndices(3, 1); // [1, 2, 4] (e1, e2, e3)
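These functions follow directly from the binary-index layout: blade i contains basis vector e(k+1) exactly when bit k of i is set, so a blade's grade is the popcount of its index. A plain-JavaScript sketch of the same logic (illustrative helpers mirroring the API above, not the WASM implementation):

```javascript
// Blade index -> grade: count the set bits (each bit = one basis vector).
function bladeGradeSketch(index) {
  let count = 0;
  for (let i = index; i > 0; i >>= 1) count += i & 1;
  return count;
}

// Number of grade-k blades in n dimensions: binomial coefficient C(n, k).
function bladesAtGradeCountSketch(dim, grade) {
  let result = 1;
  for (let i = 0; i < grade; i++) {
    result = (result * (dim - i)) / (i + 1);
  }
  return result;
}

// Indices of all blades at a grade: scan 0 .. 2^dim - 1 for matching popcount.
function gradeIndicesSketch(dim, grade) {
  const indices = [];
  for (let i = 0; i < (1 << dim); i++) {
    if (bladeGradeSketch(i) === grade) indices.push(i);
  }
  return indices;
}

bladeGradeSketch(3);            // 2  (binary 011 = e12)
bladesAtGradeCountSketch(3, 2); // 3
gradeIndicesSketch(3, 1);       // [1, 2, 4]
```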
Grade Extraction and Projection
import {
gradeExtract,
gradeProject,
gradeProjectMax,
} from 'orlando-transducers';
// 3D algebra: 2^3 = 8 coefficients
// Layout: [scalar, e1, e2, e12, e3, e13, e23, e123]
const mv = new Float64Array([1, 2, 3, 4, 5, 6, 7, 8]);
// Extract just the vector (grade 1) part
const vectors = gradeExtract(3, 1, mv);
// vectors: [2, 3, 5] (coefficients of e1, e2, e3)
// Project onto grade 1 (zero out everything else)
const projected = gradeProject(3, 1, mv);
// projected: [0, 2, 3, 0, 5, 0, 0, 0]
// Project onto grades 0 and 1 (scalar + vector)
const lowGrade = gradeProjectMax(3, 1, mv);
// lowGrade: [1, 2, 3, 0, 5, 0, 0, 0]
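Extraction and projection both reduce to the same popcount test over blade indices: extraction collects the matching coefficients into a compact array, while projection keeps the full layout and zeroes the rest. A plain-JavaScript sketch (illustrative helpers, not the WASM implementation):

```javascript
// Grade of a blade index = number of set bits
const popcount = i => {
  let c = 0;
  for (; i > 0; i >>= 1) c += i & 1;
  return c;
};

// Collect coefficients whose blade has the target grade (compact result)
function gradeExtractSketch(dim, grade, mv) {
  const out = [];
  for (let i = 0; i < (1 << dim); i++) {
    if (popcount(i) === grade) out.push(mv[i]);
  }
  return out;
}

// Same test, but keep the full layout and zero out the other grades
function gradeProjectSketch(dim, grade, mv) {
  return Array.from(mv, (x, i) => (popcount(i) === grade ? x : 0));
}

const mv = [1, 2, 3, 4, 5, 6, 7, 8];
gradeExtractSketch(3, 1, mv); // [2, 3, 5]
gradeProjectSketch(3, 1, mv); // [0, 2, 3, 0, 5, 0, 0, 0]
```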
Grade Analysis
// Which grades have non-zero components?
const mask = gradeMask(3, mv);
// mask is a bitmask: bit k set if grade k is present
// Check specific grade
hasGrade(3, 2, mv); // true (has bivector components)
// Is this a pure-grade multivector?
isPureGrade(3, mv); // false (multiple grades present)
const pureVector = new Float64Array([0, 1, 0, 0, 0, 0, 0, 0]);
isPureGrade(3, pureVector); // true (only grade 1)
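The mask-based analysis can be sketched the same way: set bit k of the mask whenever a non-zero coefficient sits at a grade-k index, then a pure-grade multivector is one whose mask has exactly one bit set. A plain-JavaScript sketch (illustrative helpers, not the WASM implementation; the all-zero edge case here is an assumption):

```javascript
const popcount = i => {
  let c = 0;
  for (; i > 0; i >>= 1) c += i & 1;
  return c;
};

// Bit k of the mask is set if any grade-k coefficient is non-zero
function gradeMaskSketch(dim, mv) {
  let mask = 0;
  for (let i = 0; i < (1 << dim); i++) {
    if (mv[i] !== 0) mask |= 1 << popcount(i);
  }
  return mask;
}

// Exactly one bit set (a power of two) means a single grade is present
function isPureGradeSketch(dim, mv) {
  const mask = gradeMaskSketch(dim, mv);
  return mask !== 0 && (mask & (mask - 1)) === 0;
}

const mixed = [1, 2, 3, 4, 5, 6, 7, 8];
const pure = [0, 1, 0, 0, 0, 0, 0, 0];
isPureGradeSketch(3, mixed); // false (grades 0-3 all present)
isPureGradeSketch(3, pure);  // true  (only grade 1)
```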
Component Access
import { componentGet, componentSet } from 'orlando-transducers';
const mv = new Float64Array([0, 0, 0, 0, 0, 0, 0, 0]);
// Set the e1 component (index 1)
const updated = componentSet(mv, 1, 3.14);
// updated: [0, 3.14, 0, 0, 0, 0, 0, 0]
// Get the e1 component
componentGet(updated, 1); // 3.14
Norms
import { mvNorm, mvNormSquared, mvNormalize } from 'orlando-transducers';
const v = new Float64Array([0, 3, 4, 0, 0, 0, 0, 0]);
mvNormSquared(v); // 25 (3^2 + 4^2)
mvNorm(v); // 5
const unit = mvNormalize(v);
// unit: [0, 0.6, 0.8, 0, 0, 0, 0, 0]
mvNorm(unit); // 1.0
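The norms are ordinary Euclidean norms over the coefficient array, so they are easy to mirror in plain JavaScript (illustrative helpers, not the WASM implementation):

```javascript
// Euclidean norm over the full coefficient array
const normSquared = mv => mv.reduce((acc, x) => acc + x * x, 0);
const norm = mv => Math.sqrt(normSquared(mv));
const normalize = mv => {
  const n = norm(mv);
  return mv.map(x => x / n);
};

const v = [0, 3, 4, 0, 0, 0, 0, 0];
normSquared(v); // 25
norm(v);        // 5
normalize(v);   // [0, 0.6, 0.8, 0, 0, 0, 0, 0]
```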
Algebraic Transformations
import { mvReverse, gradeInvolution } from 'orlando-transducers';
const mv = new Float64Array([1, 2, 3, 4, 5, 6, 7, 8]);
// Reversion: sign flip depends on grade
// grade k gets factor (-1)^(k(k-1)/2)
const reversed = mvReverse(3, mv);
// Grade involution: negate odd grades
const involuted = gradeInvolution(3, mv);
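Both transformations only flip signs per grade: reversion applies (-1)^(k(k-1)/2) and grade involution applies (-1)^k. A plain-JavaScript sketch of the sign logic (illustrative helpers, not the WASM implementation):

```javascript
const popcount = i => {
  let c = 0;
  for (; i > 0; i >>= 1) c += i & 1;
  return c;
};

// Reversion: grade k picks up the factor (-1)^(k(k-1)/2)
function mvReverseSketch(dim, mv) {
  return Array.from(mv, (x, i) => {
    const k = popcount(i);
    return ((k * (k - 1)) / 2) % 2 === 0 ? x : -x;
  });
}

// Grade involution: negate odd grades, i.e. factor (-1)^k
function gradeInvolutionSketch(dim, mv) {
  return Array.from(mv, (x, i) => (popcount(i) % 2 === 0 ? x : -x));
}

const mv = [1, 2, 3, 4, 5, 6, 7, 8];
mvReverseSketch(3, mv);       // grades 2, 3 flip: [1, 2, 3, -4, 5, -6, -7, -8]
gradeInvolutionSketch(3, mv); // grades 1, 3 flip: [1, -2, -3, 4, -5, 6, 7, -8]
```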
Rust API
All operations work on &[f64] coefficient slices:
use orlando_transducers::geometric_optics::*;

// Grade of the basis blade at index 5 (= e13 in 3D, grade 2)
assert_eq!(blade_grade(5), 2);

// Extract vector components from a multivector
let mv = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0];
let vectors = grade_extract(3, 1, &mv);
assert_eq!(vectors, vec![2.0, 3.0, 5.0]);

// Project onto a single grade
let projected = grade_project(3, 2, &mv);
// Only bivector components survive

// Normalize
let v = vec![0.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0, 0.0];
let unit = normalize(&v);
assert!((norm(&unit) - 1.0).abs() < 1e-10);
Using with Transducer Pipelines
use orlando_transducers::iter_ext::PipelineBuilder;
use orlando_transducers::geometric_optics::*;

// Process a stream of multivectors: normalize, then extract vector parts
let multivectors: Vec<Vec<f64>> = get_multivectors();

let unit_vectors: Vec<Vec<f64>> = PipelineBuilder::new()
    .map(|mv: Vec<f64>| normalize(&mv))
    .filter(|mv: &Vec<f64>| is_pure_grade(3, mv))
    .map(|mv: Vec<f64>| grade_extract(3, 1, &mv))
    .run(multivectors.into_iter());
API Reference
| Function | Description |
|---|---|
| bladeGrade(index) | Grade of a basis blade (popcount of the index) |
| bladesAtGradeCount(dim, grade) | Number of blades at a grade (binomial coefficient) |
| gradeIndices(dim, grade) | Coefficient indices for a grade |
| gradeExtract(dim, grade, mv) | Extract coefficients at a grade |
| gradeProject(dim, grade, mv) | Zero out all other grades |
| gradeProjectMax(dim, maxGrade, mv) | Keep grades up to the maximum |
| gradeMask(dim, mv) | Bitmask of present grades |
| hasGrade(dim, grade, mv) | Check for a non-zero grade |
| isPureGrade(dim, mv) | Check for a single-grade multivector |
| componentGet(mv, index) | Get a single coefficient |
| componentSet(mv, index, value) | Set a single coefficient |
| mvNorm(mv) | Euclidean norm |
| mvNormSquared(mv) | Squared norm |
| mvNormalize(mv) | Normalize to unit length |
| mvReverse(dim, mv) | Reversion |
| gradeInvolution(dim, mv) | Grade involution |