Welcome & Goals

viva-genicam provides pure Rust building blocks for the GenICam ecosystem, covering GigE Vision and USB3 Vision, with first-class support for Windows, Linux, and macOS.

Who is this book for?

  • End-users building camera applications who want a practical high-level API and copy-pasteable examples.
  • Contributors extending transports, GenApi features, and streaming – who need a clear mental model of crates and internal boundaries.

What works today

  • GigE Vision: GVCP discovery, GVSP streaming with resend and reassembly, events, action commands, chunk parsing, FORCEIP, persistent IP configuration.
  • USB3 Vision: device discovery, GenCP register I/O, bulk-endpoint streaming, async frame iterator.
  • GenApi: NodeMap with all standard node types (Integer, Float, Enum, Boolean, Command, Category, String, SwissKnife, Converter), pValue delegation, selectors, node metadata and visibility filtering.
  • CLI (viva-camctl): discovery, feature get/set, streaming, events, chunks, benchmarks, IP configuration.
  • Service bridge: expose cameras over Zenoh for genicam-studio.

The protocol implementations follow the published EMVA specifications and are validated against built-in fake camera simulators (190+ automated tests). Testing against physical cameras from different manufacturers is ongoing – bug reports and compatibility feedback are welcome.

How this book is organized

  • Start with Quick Start to build, test, and run the first discovery.
  • Read the Primer and Architecture to get the big picture.
  • Use Crate Guides and Tutorials for hands-on tasks.
  • See Networking and Troubleshooting when packets don’t behave.

Quick Start

This guide gets you from checkout to discovering cameras in minutes.

Prerequisites

  • Rust: MSRV 1.75+ (toolchain pinned via rust-toolchain.toml).
  • OS: Windows, Linux, or macOS.
  • Network (GigE Vision):
    • Allow UDP broadcast on the NIC you’ll use for discovery.
    • Optional: enable jumbo frames on that NIC for high‑throughput streaming tests.

Build & Test

# From the repo root:
cargo build --workspace

# Run all tests
cargo test --workspace

# Generate local API docs (rustdoc)
cargo doc --workspace --no-deps

First run: Discovery examples

You can try discovery in two ways—either via the high‑level viva-genicam crate example or the viva-camctl CLI.

Option A: Example (genicam crate)

# List cameras via GVCP broadcast
cargo run -p viva-genicam --example list_cameras

Option B: CLI (viva-camctl)

# Discover cameras on the selected interface (IPv4 of your NIC)
cargo run -p viva-camctl -- list --iface 192.168.0.5

Control path: read / write & XML

# Read a feature by name
cargo run -p viva-camctl -- get --ip 192.168.0.10 --name ExposureTime

# Set a feature value
cargo run -p viva-camctl -- set --ip 192.168.0.10 --name ExposureTime --value 5000

# Fetch minimal XML metadata via control path (example)
cargo run -p viva-genicam --example get_set_feature

Streaming (early GVSP)

# Receive a GVSP stream, auto‑negotiate packet size, save first two frames
cargo run -p viva-camctl -- stream --ip 192.168.0.10 --iface 192.168.0.5 --auto --save 2

Windows specifics

  • Run the terminal as Administrator the first time to let the firewall prompt appear.
  • Add inbound UDP rules for discovery and streaming.
  • Enable jumbo frames per NIC if your network supports it (helps at high FPS).

Next steps

  • Read the Primer for the concepts behind discovery, control, and streaming.
  • Jump to the Tutorial: Discover devices for a step‑by‑step walkthrough with troubleshooting tips.

GenICam & Vision Standards Primer

This chapter orients you in the standards and shows how they map to the crates in this repo. If you’re an end‑user, skim the concepts and jump to tutorials. If you’re a contributor, the mappings help you navigate the code.

1) Control vs. Data paths (big picture)

  • Control: configure the device, read status, fetch the GenApi XML. In GigE Vision, control is GVCP (GigE Vision Control Protocol, UDP) carrying GenCP (Generic Control Protocol) semantics for register reads/writes and feature access.
  • Data: receive image/metadata stream(s). In GigE Vision, data is GVSP (GigE Vision Streaming Protocol, UDP), typically one-way from camera → host.
  • Events & Actions: GVCP supports device→host events and host→device action commands for sync/triggering.
   +------------------------+        +--------------------+
   |        Host            |        |      Camera        |
   |  (this repository)     |        |   (GigE Vision)    |
   +-----------+------------+        +----------+---------+
               |  GVCP (UDP, control)           |
               |  GenCP (registers/features)    |
               +------------------------------->|
               |  configure, query, XML         |
               |                                |
               |  GVSP (UDP, data / streaming)  |
               |<-------------------------------+
               |  image payload / chunks        |

2) GenApi XML & NodeMap

  • The device exposes an XML description of its features (nodes). Nodes form a graph with types like Integer, Float, Boolean, Enumeration, Command, String, Register, and expression nodes like SwissKnife.
  • Nodes have AccessMode (RO/RW), Visibility (Beginner/Expert/Guru), Units, Min/Max/Inc, Selector links, and Dependencies (i.e., a node’s value depends on other nodes).
  • The host builds a NodeMap from the XML and evaluates nodes on demand: some read/write device registers; others compute values from expressions.

SwissKnife (implemented)

  • A SwissKnife node computes its value from an expression referencing other nodes (e.g., arithmetic, logic, conditionals). Typical uses:
    • Derive human‑readable features from raw register fields.
    • Apply scale/offset and conditionals depending on selectors.
  • In this project, SwissKnife is evaluated in the NodeMap, so reads of dependent nodes trigger the calculation transparently.
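The mechanics can be sketched with a toy example. The node names and the scale/offset expression below are hypothetical, not viva-genapi's actual API; they only illustrate how a SwissKnife-style node derives its value from another node on demand:

```rust
use std::collections::HashMap;

/// Tiny stand-in for a NodeMap: feature name -> raw integer value.
/// (Hypothetical; the real viva-genapi node types are much richer.)
type RawNodes = HashMap<&'static str, i64>;

/// Evaluate a SwissKnife-style "raw * scale + offset" expression over
/// another node, mirroring the common register-to-feature derivation.
fn eval_scaled(nodes: &RawNodes, raw_name: &str, scale: f64, offset: f64) -> Option<f64> {
    nodes.get(raw_name).map(|&raw| raw as f64 * scale + offset)
}

fn main() {
    let mut nodes = RawNodes::new();
    nodes.insert("ExposureTimeRaw", 5000); // raw register value in ticks
    // Derived feature: microseconds = ticks * 0.5 (scale chosen for illustration)
    let exposure_us = eval_scaled(&nodes, "ExposureTimeRaw", 0.5, 0.0).unwrap();
    println!("ExposureTime = {exposure_us} us");
}
```

Reading the derived node triggers the computation; the caller never sees the raw register.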

Selectors

  • Selectors (e.g., GainSelector) change the addressing or active branch so the same feature name maps to different underlying registers or computed paths.
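As a sketch (register addresses invented for illustration; the real mapping comes from the device's XML), selector routing boils down to picking a different target address for the same feature name:

```rust
use std::collections::HashMap;

/// Hypothetical sketch: the current GainSelector value decides which
/// register address a `Gain` access resolves to. Addresses are made up.
fn gain_address(selector: &str) -> Option<u64> {
    let table: HashMap<&str, u64> =
        [("All", 0x1000), ("Red", 0x1004), ("Green", 0x1008), ("Blue", 0x100C)]
            .into_iter()
            .collect();
    table.get(selector).copied()
}

fn main() {
    // Changing GainSelector changes which register a Gain access touches.
    println!("GainSelector=Red -> register {:#06x}", gain_address("Red").unwrap());
}
```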

3) Streaming: GVSP

  • UDP packets carry payloads (image data/metadata). The host reassembles frames, handles resend requests, negotiates packet size/MTU, and may introduce packet delay to avoid NIC/driver overflow.
  • Chunks: optional metadata blocks (e.g., Timestamp, ExposureTime) can be enabled and parsed alongside image data.
  • Time mapping: devices often use tick counters; the host maintains a mapping between device ticks and host time for cross‑correlation.
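A minimal sketch of such a tick-to-host-time mapping, assuming a purely linear relation and an invented tick rate (production code must also handle counter wrap-around and clock drift):

```rust
/// Linear device-tick -> host-time mapping:
/// host_ns = anchor_host_ns + (tick - anchor_tick) * ns_per_tick.
/// Assumes tick >= anchor_tick; wrap-around handling is omitted.
struct TickMap {
    anchor_tick: u64,
    anchor_host_ns: u64,
    ns_per_tick: f64, // e.g. 8.0 ns for a hypothetical 125 MHz counter
}

impl TickMap {
    fn to_host_ns(&self, tick: u64) -> u64 {
        let delta_ns = (tick - self.anchor_tick) as f64 * self.ns_per_tick;
        self.anchor_host_ns + delta_ns as u64
    }
}

fn main() {
    let map = TickMap { anchor_tick: 1_000, anchor_host_ns: 500_000, ns_per_tick: 8.0 };
    // 250 ticks after the anchor is 2000 ns later on the host clock.
    println!("host time: {} ns", map.to_host_ns(1_250));
}
```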

4) How standards map to crates

Concept                         Crate             Responsibility
GenCP (encode/decode, status)   viva-gencp        Message formats, errors, helpers for control-path operations
GVCP/GVSP (GigE Vision)         viva-gige         Discovery, control channel, streaming engine, resend/MTU/delay, events/actions
GenApi XML loader               viva-genapi-xml   Fetch XML via control path and parse schema‑lite into an internal representation
NodeMap & evaluation            viva-genapi       Node types (incl. SwissKnife), dependency resolution, selector routing, value get/set
Public façade                   viva-genicam      End‑user API combining transport + NodeMap + utilities (examples live here)

5) USB3 Vision (preview)

  • Similar split between control and data paths, but with USB3 transport and different discovery/endpoint mechanics. The higher‑level GenApi and NodeMap concepts remain the same.
Where to go next

  • Architecture Overview for a code‑level view of modules, traits, and async/concurrency.
  • Crate Guides for deep dives (APIs, examples, edge cases).
  • Tutorials to configure features and receive frames end‑to‑end.

Architecture Overview

This section maps the runtime flow, crate boundaries, and key traits so both app developers and contributors can reason about the system.

Layered view

+------------------------------+   End‑user API & examples
| viva-genicam (façade)        |   - device discovery, feature get/set
| crates/viva-genicam/examples |   - streaming helpers, CLI wiring
+--------------+---------------+
               |
               v
+------------------------------+   GenApi core
| viva-genapi                  |   - Node types (Integer/Float/Enum/Bool/Command,
|                              |     Register, String, SwissKnife)
|                              |   - NodeMap build & evaluation
|                              |   - Selector routing & dependency graph
+--------------+---------------+
               |
               v
+------------------------------+   GenApi XML
| viva-genapi-xml              |   - Fetch XML via control path
|                              |   - Parse schema‑lite → IR used by viva-genapi
+--------------+---------------+
               |
               v
+------------------------------+   Transports
| viva-gige                    |   - GVCP (control): discovery, read/write, events,
|                              |     action commands
|                              |   - GVSP (data): receive, reassembly, resend,
|                              |     MTU/packet size negotiation, delay, stats
+--------------+---------------+
               |
               v
+------------------------------+   Protocol helpers
| viva-gencp                   |   - GenCP encode/decode, status codes, helpers
+------------------------------+

Data flow

  1. Discovery (viva-gige): bind to NIC → broadcast GVCP discovery → parse replies.
  2. Connect: establish control channel (UDP) and prepare stream endpoints if needed.
  3. GenApi XML (viva-genapi-xml): read address from device registers → fetch XML → parse to IR.
  4. NodeMap (viva-genapi): build nodes, resolve links (Includes, Pointers, Selectors), set defaults.
  5. Evaluation (viva-genapi):
    • Direct nodes read/write underlying registers via viva-gige + viva-gencp.
    • Computed nodes (e.g., SwissKnife) evaluate expressions that reference other nodes.
  6. Streaming (viva-gige): configure packet size/delay → receive GVSP → reassemble → expose frames + chunks and timestamps.
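Step 6 can be illustrated with a toy reassembler (hypothetical types; the real GVSP engine in viva-gige also tracks leader/trailer packets, block ids, resend requests, and timeouts):

```rust
use std::collections::BTreeMap;

/// Toy reassembly sketch: payload packets keyed by packet id are collected
/// per frame and concatenated in id order once all have arrived.
struct FrameAssembler {
    expected: u32,
    packets: BTreeMap<u32, Vec<u8>>,
}

impl FrameAssembler {
    fn new(expected: u32) -> Self {
        Self { expected, packets: BTreeMap::new() }
    }

    /// Store one payload packet; returns the full frame when complete.
    fn push(&mut self, packet_id: u32, payload: Vec<u8>) -> Option<Vec<u8>> {
        self.packets.insert(packet_id, payload);
        if self.packets.len() as u32 == self.expected {
            Some(self.packets.values().flatten().copied().collect())
        } else {
            None
        }
    }

    /// Packet ids still missing: candidates for a resend request.
    fn missing(&self) -> Vec<u32> {
        (0..self.expected).filter(|id| !self.packets.contains_key(id)).collect()
    }
}

fn main() {
    let mut asm = FrameAssembler::new(3);
    assert!(asm.push(0, vec![1, 2]).is_none());
    assert!(asm.push(2, vec![5, 6]).is_none()); // out-of-order arrival is fine
    println!("missing before resend: {:?}", asm.missing());
    let frame = asm.push(1, vec![3, 4]).unwrap();
    println!("frame bytes: {frame:?}");
}
```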

Async, threading, and I/O

  • Transport uses async UDP sockets (Tokio) and bounded channels for back‑pressure.
  • Frame reassembly runs on dedicated tasks; statistics are aggregated periodically.
  • Node evaluation is sync from the caller’s perspective; I/O hops are awaited within accessors.
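The back-pressure idea can be shown with the standard library's bounded channel. viva-gige uses Tokio's async equivalents, but the principle is the same: a full queue blocks the producer instead of buffering frames without limit.

```rust
use std::sync::mpsc::sync_channel;
use std::thread;

/// Push `n` fake frames through a bounded channel of capacity `cap`;
/// returns how many the consumer received.
fn run_pipeline(n: u8, cap: usize) -> usize {
    let (tx, rx) = sync_channel::<Vec<u8>>(cap);
    let producer = thread::spawn(move || {
        for i in 0..n {
            // `send` blocks when `cap` frames are already queued: this is
            // back-pressure, not frame loss.
            tx.send(vec![i; 4]).unwrap();
        }
    });
    let count = rx.iter().count(); // drains until the sender is dropped
    producer.join().unwrap();
    count
}

fn main() {
    println!("received {} frames", run_pipeline(5, 2));
}
```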

Error handling & tracing

  • Errors are categorized by layer (transport/protocol/genapi/eval). Use anyhow/custom error types at boundaries.
  • Enable logs with RUST_LOG=info (or debug,trace) and consider JSON output for tooling.

Platform considerations

  • Windows/Linux/macOS supported. On Windows, run discovery once as admin to authorize firewall; consider jumbo frames per NIC for high FPS.
  • Multi‑NIC hosts should explicitly select the interface for discovery/streaming.

Extending the system

  • Add nodes in viva-genapi by implementing the evaluation trait and wiring dependencies.
  • Add transports as new viva-* crates behind a trait the facade can select at runtime.
  • Keep viva-genicam thin: compose transport + NodeMap + utilities; keep heavy logic in lower crates.

Crates overview

The viva-genicam workspace is split into small crates that mirror the structure of the GenICam ecosystem:

  • Protocols & transport (GenCP, GVCP/GVSP)
  • GenApi XML loading & evaluation
  • Public “facade” API for applications
  • Command-line tooling for everyday camera work

This chapter is the “map of the territory”. It tells you which crate to use for a given task, and where to look if you want to hack on internals.


Quick map

Crate             Path                  Role / responsibility                                           Primary audience
viva-gencp        crates/viva-gencp     GenCP encode/decode + helpers for control path over GVCP        Contributors, protocol nerds
viva-gige         crates/viva-gige      GigE Vision transport: GVCP (control) + GVSP (streaming)        End-users & contributors
viva-genapi-xml   crates/genapi-xml     Load GenICam XML from device / disk, parse into IR              Contributors (XML / SFNC work)
viva-genapi       crates/viva-genapi    NodeMap implementation, feature access, SwissKnife, selectors   End-users & contributors
viva-genicam      crates/genicam        High-level “one crate” façade combining transport + GenApi      End-users
viva-camctl       crates/viva-camctl    CLI tool for discovery, configuration, streaming, benchmarks    End-users, ops, CI scripts

If you just want to use a camera from Rust, you’ll usually start with viva-genicam (or viva-camctl from the command line) and ignore the lower layers.


How the crates fit together

At a high level, the crates compose like this:

        ┌───────────────┐      ┌────────────────┐
        │  viva-gencp   │      │  viva-genapi   │
        │ GenCP encode  │      │ NodeMap,       │
        │ / decode      │      │ SwissKnife,    │
        └───────┬───────┘      │ selectors      │
                │              └────────┬───────┘
                │                       │
        ┌───────▼───────┐      ┌────────▼───────┐
        │   viva-gige   │      │ viva-genapi-xml│
        │ GVCP / GVSP   │      │ XML loading &  │
        │ packet I/O    │      │ schema-lite IR │
        └───────┬───────┘      └────────┬───────┘
                │                       │
                └───────────┬───────────┘
                            │
                   ┌────────▼────────┐
                   │  viva-genicam   │  ← public Rust API
                   └────────┬────────┘
                            │
                   ┌────────▼────────┐
                   │   viva-camctl   │  ← CLI on top of `viva-genicam`
                   └─────────────────┘

Roughly:

  • viva-gige knows how to talk UDP to a GigE Vision device (discovery, register access, image packets, resends, stats, …).
  • viva-gencp provides the GenCP building blocks used on the control path.
  • viva-genapi-xml fetches and parses the GenApi XML that describes the device’s features.
  • viva-genapi turns that XML into a NodeMap you can read/write, including SwissKnife expressions and selector-dependent features.
  • viva-genicam stitches all of the above into a reasonably ergonomic API.
  • viva-camctl exposes common workflows from genicam as cargo run -p viva-camctl -- ….

When to use which crate

I just want to use my camera from Rust

Use viva-genicam.

Typical tasks:

  • Enumerate cameras on a NIC
  • Open a device, read/write features by name
  • Start a GVSP stream, iterate over frames, look at stats
  • Subscribe to events or send action commands

Start with the examples under crates/viva-genicam/examples/ and the Tutorials.

I want a command-line tool for daily work

Use viva-camctl.

Typical tasks:

  • Discovery: list all cameras on a given interface
  • Register/feature inspection and configuration
  • Quick streaming tests and stress benchmarks
  • Enabling/disabling chunk data, configuring events

This is also a good reference for how to structure a “real” application on top of genicam.

I need to touch GigE Vision packets / low-level transport

Use viva-gige (and viva-gencp as needed).

Example reasons:

  • You want to experiment with MTU, packet delay, resend logic, or custom stats
  • You’re debugging interoperability with a weird device and need raw GVCP/GVSP
  • You want to build a non-GenApi tool that only tweaks vendor-specific registers

The viva-gige chapter goes into more detail on discovery, streaming, events, actions, and tuning.

I want to work on GenApi / XML internals

Use viva-genapi-xml and viva-genapi.

Typical contributor activities:

  • Supporting new SFNC features or vendor extensions
  • Improving SwissKnife coverage or selector handling
  • Adding tests for tricky XML from specific camera families

The genapi-xml and viva-genapi chapters later in this section are the relevant deep dives.

If you’re not sure where a GenApi bug lives, the rule of thumb is:

  • “XML can’t be parsed” → genapi-xml
  • “Feature exists but behaves wrong” → viva-genapi
  • “Device returns odd data / status codes” → viva-gige or viva-gencp

I need a single high-level entry point

Use viva-genicam.

This crate aims to expose just enough control/streaming surface for most applications without making you think about transports, XML, or NodeMap internals.

The genicam crate chapter shows:

  • How to go from “no camera” to “frames in memory” in ~20 lines
  • How to query and set features safely (with proper types)
  • How to plug in your own logging, error handling, and runtime

Crate deep dives

The rest of this section of the book contains crate-specific chapters:

If you’re reading this for the first time, a good path is:

  1. Skim this page.
  2. Read the genicam chapter.
  3. Jump to viva-gige or viva-genapi when you hit something you want to tweak.

viva-gencp

viva-gige

genapi-xml

viva-genapi

genicam (façade)

Future crates & placeholders

Tutorials

This section walks you through typical workflows step by step.

The focus is a GigE Vision camera accessed over Ethernet, using:

  • The viva-camctl CLI for quick experiments and ops work.
  • The viva-genicam crate for Rust examples you can copy into your own code.

If you haven’t done so yet, first read the Quick Start chapter. It explains how to build the workspace and verify that your toolchain works.


If you are new to the project, the recommended reading order is:

  1. Discovery
    Find cameras on your network, verify that discovery works, and understand basic NIC and firewall requirements.

  2. Registers & features
    Read and write GenApi features (e.g. ExposureTime), understand selectors such as GainSelector, and learn when you might need raw registers.

  3. GenApi XML
    Fetch the GenICam XML from a device, inspect it, and see how it maps to the NodeMap used by viva-genapi.

  4. Streaming
    Start a GVSP stream, receive frames, look at stats, and learn which knobs matter for throughput and robustness.

You can stop after Discovery and Streaming if you only need to verify that your camera works. The other tutorials are useful when you want to build a full application or debug deeper GenApi issues.


What you need before starting

Before running any tutorial, make sure you have:

  • A working Rust toolchain (see rust-toolchain.toml for the pinned version).

  • The workspace builds successfully:

    cargo build --workspace
    
  • At least one GigE Vision camera reachable from your machine:
    • Either directly connected to a NIC.
    • Or via a switch on a dedicated subnet.

For networking details (MTU, jumbo frames, Windows specifics, etc.), see Networking once that chapter is filled in.

Tutorials overview

  • Discovery: Use viva-camctl and the genicam examples to find cameras and verify that basic communication is working.
  • Registers & features: Use features by name, work with selectors, and know when to fall back to raw register access.
  • GenApi XML: Fetch XML from the device, inspect it, and understand how genapi-xml and viva-genapi use it.
  • Streaming: Start streaming, tune packet size and delay, and interpret statistics and logging output.

Each tutorial has:

  • A CLI variant using viva-camctl.
  • A Rust variant using the viva-genicam crate and its examples.


Discovery

Goal of this tutorial:

  • Verify that your host can see your GigE Vision camera.
  • Learn how to run discovery from:
    • The viva-camctl CLI.
    • The viva-genicam Rust examples.
  • Understand the most common issues (NIC selection, firewall, subnets).

If discovery does not work, the other tutorials will not help much — fix this first.


Before you begin

Make sure that:

  • The workspace builds:

    cargo build --workspace

  • Your camera and host are physically connected:
    • Direct cable: host NIC ↔ camera.
    • Or via a switch dedicated to the camera subnet.
  • The camera has a valid IPv4 address:
    • From DHCP on your camera network, or
    • A static address that matches the host NIC’s subnet.

For deeper network discussion (jumbo frames, tuning, etc.), see Networking once that chapter is filled in.

Step 1 – Discover with viva-camctl

The easiest way to test discovery is the viva-camctl CLI, which wraps the genicam crate.

1.1. Basic discovery

Run:

cargo run -p viva-camctl -- list

What to expect:

  • On success, you get a table or list of devices with at least:
    • IP address
    • MAC address
    • Model / manufacturer (if reported)
  • If nothing appears:
    • Check that the camera is powered and connected.
    • Check that your NIC is on the same subnet as the camera.
    • Check that your host firewall allows UDP broadcast on that NIC.

1.2. Selecting an interface explicitly

On multi-NIC systems, viva-camctl may need to be told which interface to use.

Run:

cargo run -p viva-camctl -- list --iface 192.168.0.5

Where 192.168.0.5 is the IPv4 address of your host NIC on the camera network.

If you are not sure which NIC to use:

  • On Linux/macOS: use ip addr / ifconfig to inspect addresses.
  • On Windows: use ipconfig and your network settings GUI.

If discovery works when --iface is specified but not without it, your machine likely has multiple active interfaces and the automatic NIC choice is not what you expect.

Step 2 – Discover via the genicam examples

The genicam crate comes with examples that exercise the same discovery logic from Rust code.

Run:

cargo run -p viva-genicam --example list_cameras

This example:

  • Broadcasts on your camera network.
  • Prints basic info about each device it finds.

Use this when you want to:

  • See how to embed discovery into your own Rust application.
  • Compare behaviour between the CLI and the library (they should match).

The code for list_cameras lives under crates/viva-genicam/examples/ and is a good starting point for your own experiments.

Step 3 – Interpreting results

When discovery succeeds, you should record:

  • The camera’s IP address (e.g. 192.168.0.10).
  • Which host NIC / interface you used (e.g. 192.168.0.5).

You will reuse these values in later tutorials, e.g.:

  • Registers & features: --ip 192.168.0.10
  • Streaming: --ip 192.168.0.10 --iface 192.168.0.5

If you see multiple devices, you may want to label them (physically or in a note) to avoid confusion later.

Troubleshooting checklist

If viva-camctl -- list or the list_cameras example finds no devices:

  1. Physical link
    • Is the link LED on the NIC / switch / camera lit?
    • Try a different Ethernet cable or port.
  2. Subnets
    • Host NIC and camera must be on the same subnet (e.g. both 192.168.0.x/24).
    • Avoid having two NICs on the same subnet; this can confuse routing.
  3. Firewall
    • Allow UDP broadcast on the camera NIC.
    • On Windows, make sure the executable is allowed for both “Private” and “Public” networks or run inside a network profile that permits broadcast.
  4. Multiple NICs
    • Use --iface to force the correct interface.
    • Temporarily disable other NICs to confirm the problem is NIC selection.
  5. Vendor tools
    • If the vendor’s viewer can see the camera but viva-camctl cannot:
      • Compare which NIC / IP the vendor tool uses.
      • Check whether the vendor tool reconfigured the camera’s IP (e.g. via DHCP or “force IP” features).

If discovery is still failing after this checklist, capture logs with:

RUST_LOG=debug cargo run -p viva-camctl -- list --iface <host-ip>

and open an issue with the log output and a short description of your setup. This will also be useful when extending the GigE transport.

Registers & features

Goal of this tutorial:

  • Read and write GenApi features such as ExposureTime or Gain.
  • Understand how features map to the underlying registers.
  • Learn the basics of selectors (e.g. GainSelector) and how they affect values.
  • See how to do the same thing from:
    • The viva-camctl CLI.
    • The viva-genicam Rust examples.

If you haven’t done so yet, first go through the Discovery tutorial, so you know the IP of your camera and which host interface you’re using.


Concepts: features vs registers

GenICam exposes camera configuration through features described in the GenApi XML:

  • A feature has a name (ExposureTime, Gain, PixelFormat, …).
  • Each feature has a type:
    • Integer / Float / Boolean / Enumeration / Command, …
  • Under the hood, a feature usually corresponds to one or more registers:
    • A simple feature may read/write a single 32-bit register.
    • More complex ones may be derived via SwissKnife expressions or depend on selectors.

The viva-genapi crate:

  • Loads the XML (via viva-genapi-xml).
  • Builds a NodeMap.
  • Lets you read and write features by name using typed accessors.

The viva-genicam crate and viva-camctl CLI sit on top of this NodeMap and try to hide most of the low-level details.


Step 1 – Inspect features with viva-camctl

The viva-camctl CLI exposes basic feature access via get and set subcommands.

You need:

  • The camera IP (from the discovery tutorial).
  • Optionally, the host interface IP if you have multiple NICs.

1.1. Read a feature by name

Example: read ExposureTime from a camera at 192.168.0.10:

cargo run -p viva-camctl -- \
  get --ip 192.168.0.10 --name ExposureTime

You should see:

  • The current value.
  • The type (e.g. Float or Integer).
  • Possibly range information (min/max/increment) if available.

If you prefer machine-readable output, add --json:

cargo run -p viva-camctl -- \
  get --ip 192.168.0.10 --name ExposureTime --json

This is handy for scripting and CI.

1.2. Write a feature by name

To change a value, use the set subcommand. For example, set exposure to 5000 microseconds:

cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name ExposureTime --value 5000

Then verify:

cargo run -p viva-camctl -- \
  get --ip 192.168.0.10 --name ExposureTime

If the value doesn’t change:

  • The feature may be read-only (depending on acquisition state).
  • There may be constraints (e.g. limited range, alignment).
  • Another feature (like ExposureAuto) may be overriding manual control.

Those cases are described in more depth in the viva-genapi chapter.

Step 2 – Work with selectors

Many cameras use selectors to multiplex multiple logical settings onto the same underlying registers. A common pattern is:

  • GainSelector = All, Red, Green, Blue, …
  • Gain = value for the currently selected channel.

When you change GainSelector, you are effectively changing which “row” you are editing. The NodeMap takes care of switching the right registers.

2.1. Inspect which selectors exist

You can use viva-camctl to dump a selector feature and see its possible values. For example, to inspect GainSelector:

cargo run -p viva-camctl -- \
  get --ip 192.168.0.10 --name GainSelector --json

Look for:

  • The current value (e.g. “All”).
  • The list of allowed values / enum entries.

2.2. Change a feature through a selector

To set different gains for different channels, a typical sequence is:

# Select the red channel, then set Gain
cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name GainSelector --value Red

cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name Gain --value 5.0

# Select the blue channel, then set Gain
cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name GainSelector --value Blue

cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name Gain --value 3.0

From your perspective, you are just changing features. Internally, viva-genapi:

  • Evaluates the selector.
  • Resolves which nodes and registers are active.
  • Applies any SwissKnife expressions as needed.

The selectors_demo example in the viva-genicam crate shows this pattern in Rust.

Step 3 – Do the same from Rust (genicam examples)

The genicam crate provides examples that mirror the CLI operations. 

3.1. Basic get/set example

Run the get_set_feature example:

cargo run -p viva-genicam --example get_set_feature

This example demonstrates:

  • Opening a camera (e.g. by IP or by index).
  • Getting a feature by name.
  • Printing its value and metadata.
  • Setting a new value and verifying it.

Inspect the source under crates/viva-genicam/examples/get_set_feature.rs for a minimal template you can reuse in your own project.

Typical pseudo-flow inside that example (simplified):

// Pseudocode sketch — see the actual example for details
let mut ctx = genicam::Context::new()?;
let cam = ctx.open_by_ip("192.168.0.10".parse()?)?;
let mut nodemap = cam.nodemap()?;

// Read a float feature
let exposure: f64 = nodemap.get_float("ExposureTime")?;
println!("ExposureTime = {} us", exposure);

// Write a new value
nodemap.set_float("ExposureTime", 5000.0)?;

Types and method names may differ slightly; always follow the real example in the repository for exact signatures.

3.2. Selectors demo

To see selector logic in code, run:

cargo run -p viva-genicam --example selectors_demo

This example walks through:

  • Enumerating selector values.
  • Looping over them to set/read the associated feature.
  • Printing out the effective values per selector.

This is a good reference if you need to build a UI that exposes per-channel settings (e.g. separate gains per color channel).

Step 4 – When you might need raw register access

Most applications should prefer feature-by-name access via GenApi:

  • You get type safety (integers vs floats vs enums).
  • You respect vendor constraints and SFNC behaviour.
  • Your code is more portable across cameras.

However, there are cases where raw registers are still useful:

  • Debugging unusual vendor behaviour or firmware bugs.
  • Working with undocumented features that are not in the XML.
  • Bringing up very early prototypes where the GenApi XML is incomplete.

The lower-level crates (viva-gige and viva-gencp) expose primitives for reading and writing device memory directly. Refer to:

for details and examples. Be careful: writing to arbitrary registers can easily put the device into an unusable state until power-cycled.

Recap

After this tutorial you should be able to:

  • Read and write GenApi features by name using viva-camctl.
  • Understand and use selector features (e.g. GainSelector → Gain).
  • Locate and run the genicam examples (get_set_feature, selectors_demo) as templates for your own applications.
  • Know that raw register access exists, but is usually a last resort.

Next step: GenApi XML, which covers how the XML is fetched and turned into the NodeMap that backs these features.


GenApi XML

Goal of this tutorial:

  • Understand what the GenICam XML is and where it lives.
  • See how viva-genapi-xml:
    • Fetches the XML from the device (via the FirstURL register).
    • Parses it into a lightweight internal representation.
  • Learn how to call this from Rust using a simple memory reader closure.
  • Know when you actually need to look at the XML (and when you don’t).

You should already have:

  • Completed Discovery.
  • Completed Registers & features, or at least be comfortable with the idea of features (ExposureTime, Gain, etc.) backed by registers.

1. What is the GenICam XML?

Every GenICam-compliant device exposes a self-description XML file:

  • It lists all features the device supports (name, type, access mode, range).
  • It defines how those features map to device registers.
  • It encodes categories, selectors, and SwissKnife expressions.
  • It declares which version of the GenApi schema the file uses.

This XML is normally stored in the device’s non-volatile memory. On the control path, the host:

  1. Reads the FirstURL register at a well-known address (0x0000).
  2. Interprets it as a URL that tells where the XML actually lives:
    • Often a “local” memory address + size.
    • In theory, it could be http://… or file://… as well.
  3. Reads the XML bytes from that location.
  4. Hands the XML string to a GenApi implementation (here: viva-genapi).

The viva-genapi-xml crate encapsulates steps 1–3:

  • Discover where to read XML from.
  • Read it over the existing memory read primitive.
  • Parse it into a simple, Rust-friendly model that the rest of the stack uses.
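As an illustration of steps 1–2, a "Local:" first URL conventionally looks like Local:filename.xml;address;length with hexadecimal address and length fields. The parser below is a hedged sketch, not viva-genapi-xml's actual code (which also handles zipped payloads and other URL schemes):

```rust
/// Parsed form of a GenICam-style "Local:" first URL, e.g.
/// "Local:camera.xml;F0F00424;3A92" (address and length in hex).
#[derive(Debug, PartialEq)]
struct LocalXmlUrl {
    filename: String,
    address: u64,
    length: usize,
}

/// Sketch parser: split on ';' and decode the hex fields.
fn parse_local_url(url: &str) -> Option<LocalXmlUrl> {
    let rest = url
        .strip_prefix("Local:")
        .or_else(|| url.strip_prefix("local:"))?;
    let mut parts = rest.split(';');
    let filename = parts.next()?.to_string();
    let address = u64::from_str_radix(parts.next()?, 16).ok()?;
    let length = usize::from_str_radix(parts.next()?, 16).ok()?;
    Some(LocalXmlUrl { filename, address, length })
}

fn main() {
    let url = parse_local_url("Local:camera.xml;F0F00424;3A92").unwrap();
    println!("read {:#x} bytes at {:#x}", url.length, url.address);
}
```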

2. Overview of viva-genapi-xml

At a high level, viva-genapi-xml provides three building blocks:

  • A function that fetches the XML from the device using a memory reader:
#![allow(unused)]
fn main() {
// Rough shape / pseudocode
pub async fn fetch_and_load_xml<F, Fut>(
    read_mem: F,
) -> Result<String, XmlError>
where
    F: FnMut(u64, usize) -> Fut,
    Fut: Future<Output = Result<Vec<u8>, XmlError>>;
}
  • A function that parses XML into minimal metadata (schema version, top-level features) without understanding every node type:
#![allow(unused)]
fn main() {
pub fn parse_into_minimal_nodes(xml: &str) -> Result<MinimalXmlInfo, XmlError>;
}
  • A function that parses XML into a full XmlModel consisting of a flat list of node declarations (Integer, Float, Enum, Boolean, Command, Category, SwissKnife, …), including addressing and selector metadata.

You normally will not call these directly in application code (the genicam crate does this for you), but they are useful when:

  • Debugging why a particular feature behaves a certain way.
  • Inspecting how a vendor encoded selectors or SwissKnife expressions.
  • Adding support for new node types or schema variations in viva-genapi.

3. Fetching XML from a device in Rust

This section shows how you could call genapi-xml directly. The exact types in your code will differ depending on whether you start from genicam or viva-gige, but the pattern is always the same:

  1. Open a device.
  2. Provide a read_mem(addr, len) async function/closure.
  3. Call fetch_and_load_xml(read_mem).

3.1. Memory reader closure concept

fetch_and_load_xml does not know about GVCP, sockets, or cameras. It only knows how to call a function with this shape:

#![allow(unused)]
fn main() {
async fn read_mem(address: u64, length: usize) -> Result<Vec<u8>, XmlError>;
}

Internally it will:

  • Read up to a small buffer (e.g. 512 bytes) at address 0x0000.
  • Interpret that buffer as a C string containing the FirstURL.
  • Parse the URL and decide where to read the XML from.
  • Read that region into memory and return it as a String.

Your job is to plug in a closure that uses whatever transport you have:

  • A genicam device method (e.g. device.read_memory(address, length)).
  • A low-level viva-gige control primitive.

3.2. Example: fetch XML using a genicam-style device

Below is illustrative pseudocode. Use it as a template and adapt to the actual types in your project.

#![allow(unused)]
fn main() {
use viva_genapi_xml::{fetch_and_load_xml, XmlError};
use std::future::Future;

async fn fetch_xml_for_device() -> Result<String, XmlError> {
    // 1. Open your device using the higher-level API.
    //    Exact API varies; adjust to your real `viva-genicam` / `viva-gige` types.
    let mut ctx = genicam::Context::new().map_err(|e| XmlError::Transport(e.to_string()))?;
    let mut dev = ctx
        .open_by_ip("192.168.0.10".parse().unwrap())
        .map_err(|e| XmlError::Transport(e.to_string()))?;

    // 2. Define a memory reader closure.
    //    It must accept (address, length) and return bytes.
    let mut read_mem = move |addr: u64, len: usize| {
        async {
            // Replace `read_memory` with the actual method you have.
            let bytes = dev
                .read_memory(addr, len)
                .await
                .map_err(|e| XmlError::Transport(e.to_string()))?;
            Ok(bytes)
        }
    };

    // 3. Ask `viva-genapi-xml` to follow FirstURL and return the XML document.
    let xml = fetch_and_load_xml(&mut read_mem).await?;
    Ok(xml)
}
}

Key points:

  • The closure is async and can perform chunked transfers internally.
  • XmlError::Transport is used to wrap any transport-level errors.
  • HTTP / file URLs are currently treated as Unsupported in XmlError; the typical GigE Vision case uses a local memory address.
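The chunked-transfer idea behind the closure can be sketched transport-agnostically. A synchronous Python toy, where read_mem is any hypothetical callable that returns bytes:

```python
def read_region(read_mem, address: int, length: int, chunk: int = 512) -> bytes:
    """Read `length` bytes starting at `address` in fixed-size chunks.

    Real transports cap a single read, so large regions (like a multi-KB
    GenICam XML) are fetched piecewise and concatenated.
    """
    out = bytearray()
    while len(out) < length:
        n = min(chunk, length - len(out))
        out += read_mem(address + len(out), n)
    return bytes(out)

# Fake 'device memory' backing the reader
memory = bytes(range(256)) * 8                  # 2048 bytes
reader = lambda addr, n: memory[addr:addr + n]
print(len(read_region(reader, 0x100, 1000)))    # 1000
```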

4. Inspecting minimal XML metadata

Once you have the XML string, you can parse just enough to answer questions like:

  • “Which GenApi schema version does this camera use?”
  • “What are the top-level categories / features?”
  • “Does this XML look obviously broken?”

viva-genapi-xml exposes a lightweight parse function for that:

#![allow(unused)]
fn main() {
use viva_genapi_xml::{parse_into_minimal_nodes, XmlError};

fn inspect_xml(xml: &str) -> Result<(), XmlError> {
    let info = parse_into_minimal_nodes(xml)?;

    if let Some(schema) = &info.schema_version {
        println!("GenApi schema version: {schema}");
    } else {
        println!("GenApi schema version: (not found)");
    }

    println!("Top-level features / categories:");
    for name in &info.top_level_features {
        println!("  - {name}");
    }

    Ok(())
}
}

This is intentionally lossy: it does not understand every node type. Its job is to be:

  • Fast enough for quick sanity checks.
  • Robust to schema extensions that are not yet implemented.

Use this when you just need to confirm that:

  • The XML is parseable at all.
  • It roughly matches expectations for your camera family.
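To get a feel for what a minimal parse extracts, the same quick check can be approximated with any XML parser. A toy Python sketch over a simplified, namespace-free GenICam-style document (real camera XML uses namespaces and far more node types):

```python
import xml.etree.ElementTree as ET

XML = """\
<RegisterDescription SchemaMajorVersion="1" SchemaMinorVersion="1">
  <Category Name="Root">
    <pFeature>AcquisitionControl</pFeature>
    <pFeature>ImageFormatControl</pFeature>
  </Category>
</RegisterDescription>"""

root = ET.fromstring(XML)
schema = f"{root.get('SchemaMajorVersion')}.{root.get('SchemaMinorVersion')}"
top_level = [p.text for p in root.findall("./Category[@Name='Root']/pFeature")]
print("schema:", schema)          # schema: 1.1
print("top-level:", top_level)    # ['AcquisitionControl', 'ImageFormatControl']
```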

5. From XML to a full NodeMap

The next step (handled elsewhere in the stack) is:

  1. Parse XML into an XmlModel: a flat list of NodeDecl entries that carry:
    • Feature name and type (Integer/Float/Enum/Bool/Command/Category/SwissKnife).
    • Addressing information (fixed / selector-based / indirect).
    • Access mode (RO/WO/RW).
    • Bitfield and byte-order information.
    • Selector relationships and SwissKnife expressions.
  2. Feed this XmlModel into viva-genapi, which:
    • Instantiates a NodeMap.
    • Resolves feature dependencies, selectors, and expressions at runtime.
    • Exposes typed getters/setters like get_float(“ExposureTime”).

You do not need to perform this plumbing manually in a typical application:

  • The genicam crate will fetch and parse XML as part of its device setup.
  • The viva-camctl CLI uses that same pipeline when you call get / set on features.

If you want the gory details, see:

  • GenApi XML loader: genapi-xml
  • GenApi core & NodeMap: viva-genapi

(these chapters go into internal structures and how to extend them).

6. When should you look at the XML?

Most of the time, you can treat the XML as an implementation detail and just:

  • Use viva-camctl for manual experimentation.
  • Use genicam’s NodeMap accessors from Rust.

You should crack open the XML when:

  • A feature behaves differently from the SFNC documentation.
  • Selectors are not doing what you expect.
  • You hit a SwissKnife or bitfield corner case.
  • You are adding support for a new vendor-specific wrinkle to viva-genapi.

Typical workflow:

  1. Use your transport or genicam helper to dump the XML to a file.
  2. Run parse_into_minimal_nodes to quickly confirm schema and top-level layout.
  3. Run the “full” XML → XmlModel path (via the crate internals) when working on viva-genapi changes.
  4. Use a normal XML editor / viewer when manually exploring categories and features.

7. Recap

After this tutorial you should:

  • Know what the GenICam XML is and how it relates to features and registers.
  • Understand how genapi-xml uses FirstURL and a memory reader closure to retrieve the XML document from the device.
  • Be able to write a small Rust helper that:
    • Fetches the XML with fetch_and_load_xml.
    • Inspects basic metadata with parse_into_minimal_nodes.
  • Know when it is worth digging into XML versus staying at the feature level.

Next up: Streaming — actually getting image data out of the camera, now that you know how its configuration is described.

Streaming

Goal of this tutorial:

  • Start a GVSP stream from your camera.
  • See how to:
    • Run streaming from the viva-camctl CLI.
    • Run a basic streaming example from the viva-genicam crate.
  • Understand the key knobs for stability:
    • Packet size / MTU
    • Packet delay
    • Resends and backpressure

You should already have:

  • Completed Discovery and know:
    • The camera IP address (e.g. 192.168.0.10).
    • The host NIC / interface used for the camera (e.g. 192.168.0.5).
  • Ideally gone through Registers & features so you can configure basic camera settings.

1. Basics: how GVSP streaming works

Very simplified:

  1. On the control path (GVCP / GenCP), you configure:
    • Pixel format, ROI, exposure, etc.
    • Streaming destination (host IP / port).
    • Whether the camera uses resends, chunk data, etc.
  2. When you tell the camera to start acquisition, it begins sending:
    • GVSP data packets (your image payload).
    • Occasionally leader/trailer or event packets, depending on mode.
  3. The host reassembles packets into complete frames, handles resends and timeouts, and exposes a stream of “frames + stats” to you.

The viva-gige crate owns the low-level GVSP packet handling. The viva-genicam crate builds on that to present a higher-level streaming API. viva-camctl wraps viva-genicam in a CLI.
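The reassembly and resend bookkeeping in step 3 boils down to gap detection over packet ids. A toy sketch of the host-side logic:

```python
def missing_packets(received_ids, packet_count):
    """Return the packet ids absent from one frame (ids here start at 1).

    A host-side receiver would request exactly these ids as resends
    before declaring the frame incomplete.
    """
    return sorted(set(range(1, packet_count + 1)) - set(received_ids))

print(missing_packets([1, 2, 4, 6], 6))  # [3, 5]
```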


2. Streaming with viva-camctl

The exact flags may evolve; always check:

cargo run -p viva-camctl -- stream --help

for the authoritative list. The examples below illustrate the typical usage pattern.

2.1. Start a basic stream

Start a stream from a camera at 192.168.0.10 using the host interface 192.168.0.5:

cargo run -p viva-camctl -- \
  stream --ip 192.168.0.10 --iface 192.168.0.5

What you should expect:

  • A textual status showing:
    • Frames received.
    • Drops / incomplete frames.
    • Resend statistics (if the camera supports resends).
    • Measured throughput (MB/s or similar).
  • The tool may run until you interrupt it (Ctrl+C), or it may have:
    • A --count option (receive N frames).
    • A --duration option (run for N seconds).

If you see no frames:

  • Double-check that streaming is enabled on the camera.
  • Ensure you haven’t configured a different destination IP / port in a vendor tool.
  • Make sure the iface IP you pass is the one the camera can reach.

2.2. Saving frames to disk

Many users want to save frames as a quick sanity check or for offline analysis. If viva-camctl stream exposes options like --output / --dir / --save, use them; for example:

cargo run -p viva-camctl -- \
  stream --ip 192.168.0.10 --iface 192.168.0.5 \
  --count 100 --output ./frames

Typical behaviour:

  • Create a directory.
  • Save each frame as:
    • Raw bytes (e.g. .raw), or
    • PGM/PPM (.pgm / .ppm), or
    • Some simple container format.

If you are unsure which formats are supported, check --help or the viva-camctl crate documentation.

Saved frames are useful to:

  • Inspect pixel data in an image viewer or with Python/OpenCV.
  • Compare against the vendor’s viewer for debugging.

3. Streaming from Rust using genicam

The genicam crate usually offers one or more streaming examples (search for stream_ in crates/viva-genicam/examples/).

Run the simplest one, for example:

cargo run -p viva-genicam --example stream_basic

(If the actual example name differs, adapt accordingly.)

What such an example typically does:

  1. Open a device (by IP or index).
  2. Configure basic streaming parameters if needed (pixel format, ROI, exposure).
  3. Build a stream (e.g. using a StreamBuilder or similar).
  4. Start acquisition and iterate over frames in a loop.
  5. Print per-frame stats or a summary.

A typical pseudo-flow (simplified, not exact code):

// Pseudocode sketch — see the actual example for real API
use genicam::prelude::*;

fn main() -> anyhow::Result<()> {
    // 1. Context and device
    let mut ctx = Context::new()?;
    let mut dev = ctx.open_by_ip("192.168.0.10".parse()?)?;

    // 2. Optional: tweak features before streaming
    let mut nodemap = dev.nodemap()?;
    nodemap.set_enum("PixelFormat", "Mono8")?;
    nodemap.set_float("AcquisitionFrameRate", 30.0)?;

    // 3. Build a stream
    let mut stream = dev.build_stream()?.start()?;

    // 4. Receive frames in a loop
    for (i, frame) in stream.iter().enumerate() {
        let frame = frame?;
        println!(
            "Frame #{i}: {} x {}, ts={:?}, drops={} resends={}",
            frame.width(),
            frame.height(),
            frame.timestamp(),
            frame.stats().dropped_frames,
            frame.stats().resends,
        );

        if i >= 99 {
            break;
        }
    }

    Ok(())
}

Use the real example as the ground truth for API names and error handling.

4. Tuning for stability and performance

Streaming is where GigE Vision setup matters most. A few knobs you will encounter (some via camera features, some via host configuration):

4.1. Packet size and MTU

  • Packet size too large for your NIC / path MTU:
    • Packets get fragmented or dropped.
    • High drop/resend counts.
  • Packet size too small:
    • More packets per frame → more overhead.
    • Higher chance of bottleneck at CPU or driver level.

Typical approach:

  • Enable jumbo frames on the camera network (e.g. MTU 9000) if your switch and NIC support them.
  • Set camera packet size slightly below MTU (e.g. 8192 for MTU 9000).
  • Observe throughput and drop/resend statistics.
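To see why packet size matters, compare the packet count per frame under standard vs jumbo MTU. The payload sizes below are illustrative round numbers (actual GVSP payload is the MTU minus IP/UDP/GVSP headers):

```python
import math

def packets_per_frame(width, height, bytes_per_pixel, payload_bytes):
    """Number of GVSP data packets needed to carry one frame's payload."""
    return math.ceil(width * height * bytes_per_pixel / payload_bytes)

# Mono8 1920x1080 frame (~2 MB of payload)
print(packets_per_frame(1920, 1080, 1, 1440))  # 1440 packets near MTU 1500
print(packets_per_frame(1920, 1080, 1, 8940))  # 232 packets near MTU 9000
```

Roughly 6x fewer packets per frame means 6x fewer per-packet interrupts, reorder checks, and resend-tracking entries on the host.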

4.2. Packet delay (inter-packet gap)

Some cameras allow setting an inter-packet delay or packet interval:

  • Too little delay:
    • Bursty traffic, easily overloading switches, NICs, or host buffers.
  • Modest delay:
    • Smoother traffic at the cost of slightly higher end-to-end latency.

If your stats show frequent drops/resends at high frame rates:

  • Try increasing the packet delay slightly.
  • Monitor whether drops/resends go down while throughput remains acceptable.

4.3. Resends and backpressure

GVSP supports packet resends:

  • The host tracks missing packets in a frame.
  • It requests resends from the camera.
  • The camera re-sends the missing packets.

The viva-gige layer surfaces statistics like:

  • Dropped packets.
  • Number of resend requests.
  • Number of resent packets actually received.

Use these metrics to:

  • Detect whether your current network configuration is “healthy”.
  • Compare different NICs, cables, or switches.
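A crude health check over such counters might look like the following; the counter names are hypothetical stand-ins, not the actual viva-gige API:

```python
def loss_rate(received: int, dropped: int) -> float:
    """Fraction of expected packets that never arrived (before resends)."""
    total = received + dropped
    return dropped / total if total else 0.0

def looks_healthy(received, dropped, threshold=1e-3):
    """Treat sub-0.1% raw packet loss as 'healthy' (arbitrary cutoff)."""
    return loss_rate(received, dropped) < threshold

print(looks_healthy(1_000_000, 120))    # True  (~0.01% loss)
print(looks_healthy(1_000_000, 5_000))  # False (~0.5% loss)
```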

5. Troubleshooting streaming issues

If streaming starts but is unreliable, here is a practical checklist:

  1. Packet drops / resends spike immediately
    • Check MTU and packet size alignment.
    • Try lowering frame rate or resolution temporarily.
    • Use a dedicated NIC and switch if possible.
  2. No frames arrive, but discovery and feature access work
    • Confirm the camera is configured to send to your host IP / port.
    • Ensure no other tool (vendor viewer) is already consuming the stream.
    • Double-check any firewall rules that might block UDP on the stream port.
  3. Frames arrive but with wrong size or format
    • Verify PixelFormat and ROI in the NodeMap (viva-camctl get / set).
    • Confirm your code interprets the buffer layout correctly (Mono8 vs Bayer).
  4. Intermittent hiccups under load
    • Look at CPU usage and other traffic on the same NIC.
    • Consider enabling jumbo frames and increasing packet delay.
    • On Windows, ensure high-performance power profile and up-to-date NIC drivers.

When in doubt:

  • Save a small sequence of frames to disk.
  • Capture logs at a higher verbosity (e.g. RUST_LOG=debug).
  • Compare behaviour with the vendor’s viewer using the same network setup.

6. Recap

After this tutorial you should be able to:

  • Start a GVSP stream using viva-camctl.
  • Run a streaming example from the viva-genicam crate.
  • Interpret basic streaming stats (frames, drops, resends, throughput).
  • Know which knobs to tweak first when streaming is unreliable:
    • MTU, packet size, packet delay, frame rate, dedicated NIC.

For more detailed background on how GVCP/GVSP packets are handled internally, see the viva-gige crate chapter.

Next steps:

  • Networking — a more systematic look at NIC configuration, MTU, and common deployment topologies.
  • Later: dedicated crate chapters for viva-gige and viva-genicam for contributor-level details.

Testing without hardware

This tutorial shows how to evaluate the full viva-genicam stack without physical cameras or external tools. The viva-fake-gige crate provides an in-process GigE Vision camera simulator that speaks real GVCP/GVSP protocols on localhost.

Quick start

# Run the self-contained demo
cargo run -p viva-genicam --example demo_fake_camera

Expected output:

Starting fake GigE Vision camera on 127.0.0.1:3956 ...
  Fake camera is running.

Discovering cameras (2 s timeout) ...
  Found 1 device(s):
    IP: 127.0.0.1  Model: FakeGigE  Manufacturer: viva-genicam

Connecting to 127.0.0.1 ...
  Connected. GenApi XML: 5788 bytes, 20 features.

Reading camera features:
  Width = 640
  Height = 480
  PixelFormat = Mono8
  ExposureTime = 5000
  Gain = 0
  GevTimestampTickFrequency = 1000000000

Setting Width = 320, ExposureTime = 10000 ...
  Width readback = 320

Streaming 5 frames ...
  Frame 1: 320x480 Mono8 payload=153600B ts=7393542
  Frame 2: 320x480 Mono8 payload=153600B ts=113549417
  ...

Demo complete. All operations succeeded without hardware.

What the fake camera supports

| Feature | Status |
| --- | --- |
| GVCP discovery (broadcast on loopback) | Supported |
| GenCP register read/write (READREG, WRITEREG, READMEM, WRITEMEM) | Supported |
| Control Channel Privilege (CCP) | Supported |
| GenApi XML with SFNC features | Width, Height, PixelFormat, ExposureTime, Gain |
| GVSP frame streaming | Synthetic gradient images at configurable FPS |
| Device timestamps (1 GHz tick rate) | Supported (ns since acquisition start) |
| Timestamp latch (GevTimestampValue) | Supported |
| Chunk data (Timestamp, ExposureTime) | Supported when ChunkModeActive=1 |

Running integration tests

All integration tests use the fake camera automatically:

# Full workspace test suite (includes fake camera tests)
cargo test --workspace

# Just the camera integration tests (12 tests)
cargo test -p viva-genicam --test fake_camera

# Zenoh service end-to-end tests (3 tests)
cargo test -p viva-service --test fake_camera_e2e

Using the fake camera in your own code

Add viva-fake-gige as a dev-dependency:

[dev-dependencies]
viva-fake-gige = { git = "https://github.com/VitalyVorobyev/viva-genicam" }

Start a fake camera in your test:

#![allow(unused)]
fn main() {
use viva_fake_gige::FakeCamera;

#[tokio::test]
async fn my_camera_test() {
    // Start a fake camera on loopback
    let _camera = FakeCamera::builder()
        .width(1024)
        .height(768)
        .fps(15)
        .bind_ip([127, 0, 0, 1].into())
        .port(3956)
        .build()
        .await
        .expect("failed to start fake camera");

    // Now use viva_genicam::gige::discover_all() to find it,
    // connect_gige() to connect, and FrameStream to stream.
}
}

Running as a standalone server

The viva-fake-gige binary starts a long-running fake camera that stays alive until Ctrl+C. This is the recommended way to test interactively with viva-camctl or viva-service + genicam-studio.

# Terminal 1: start the fake camera
cargo run -p viva-fake-gige

# Custom dimensions and frame rate
cargo run -p viva-fake-gige -- --width 512 --height 512 --fps 15

Output:

Fake camera running on 127.0.0.1:3956 (640x480 Mono8 @ 30 fps)
Press Ctrl+C to stop.

Using the CLI with the fake camera

With the fake camera running in Terminal 1, use viva-camctl in Terminal 2:

# Discover (use --iface to include loopback)
cargo run -p viva-camctl -- list --iface 127.0.0.1

# Read / write features
cargo run -p viva-camctl -- get --ip 127.0.0.1 --name Width
cargo run -p viva-camctl -- set --ip 127.0.0.1 --name Width --value 512
cargo run -p viva-camctl -- get --ip 127.0.0.1 --name DeviceModelName

E2E testing with genicam-studio

The full stack test uses 3 terminals:

# Terminal 1: fake camera
cargo run -p viva-fake-gige

# Terminal 2: camera service (bridges camera to Zenoh)
# The --zenoh-config is required so studio can connect via TCP
cargo run -p viva-service -- \
  --iface lo0 \
  --zenoh-config ../genicam-studio/config/zenoh-local.json5
# On Linux: --iface lo

# Terminal 3: studio app (auto-loads config/zenoh-studio.json5)
cd ../genicam-studio/apps/genicam-studio-tauri
cargo tauri dev

The studio will discover the fake camera, show its feature tree, and stream gradient images in the viewer. See genicam-studio/docs/manual-e2e-test.md for the full test checklist.

Fake camera configuration

The FakeCameraBuilder supports:

#![allow(unused)]
fn main() {
use viva_fake_gige::FakeCamera;
async fn example() {
let camera = FakeCamera::builder()
    .width(1920)       // Image width (default: 640)
    .height(1080)      // Image height (default: 480)
    .fps(60)           // Target frame rate (default: 30)
    .bind_ip([127, 0, 0, 1].into())  // Bind address (default: 127.0.0.1)
    .port(3956)        // GVCP port (default: 3956)
    .build()
    .await
    .unwrap();
}
}

Image dimensions and exposure time can also be changed at runtime through GenApi register writes – the fake camera responds to Width, Height, ExposureTime, and Gain register writes just like a real camera.

Python bindings

The viva-genicam Python package wraps the Rust workspace behind a NumPy-friendly API. It ships as a pre-built wheel on PyPI — no C toolchain, no aravis, libusb is statically bundled.

pip install viva-genicam

import viva_genicam as vg

cams = vg.discover(timeout_ms=500)
cam = vg.connect_gige(cams[0])
print(cam.get("DeviceModelName"))

with cam.stream() as frames:
    for frame in frames:
        arr = frame.to_numpy()           # NumPy (H, W) or (H, W, 3) uint8
        break

Tutorials

  1. Install & hello-camera — install the wheel, run the self-contained fake-camera demo.
  2. Discovery — enumerate GigE and U3V cameras, restrict to one NIC, auto-detect interfaces.
  3. Control & introspection — read and write features, walk the NodeMap, discover which features apply.
  4. Streaming — context-manager streams, NumPy frames, pixel formats, timestamps.

Reference

  • API reference — every public class, function, and exception in one place.
  • Example scripts — runnable Python files mirroring the most common Rust examples.

Supported

  • Python 3.9+, abi3 wheels (one wheel covers every minor version).
  • GigE Vision: discovery, control, streaming, chunks, events, time sync.
  • USB3 Vision: discovery, control, streaming.
  • Platforms with pre-built wheels: Linux x86_64 (manylinux_2_28), macOS arm64, Windows x86_64.

Need another platform? The sdist on PyPI builds from source — you’ll need a Rust toolchain (rustup) and a C compiler. libusb is always statically vendored; no system package needed.

Install & hello-camera

Install from PyPI

pip install viva-genicam

Wheels ship for:

| OS | Arch | Python |
| --- | --- | --- |
| Linux (manylinux_2_28) | x86_64 | 3.9+ (abi3) |
| macOS | arm64 | 3.9+ (abi3) |
| Windows | x86_64 | 3.9+ (abi3) |

libusb is statically linked into the extension module — no need to apt install libusb-1.0-0-dev or brew install libusb on the install side.

If you are on a platform without a pre-built wheel, pip falls back to the sdist; you will need a Rust toolchain (rustup) and a C compiler installed.

Verify the install

import viva_genicam as vg
print(vg.__version__)
print(vg.discover(timeout_ms=300))

If no cameras are physically connected, you should see an empty list — not an exception.

Hello camera — no hardware needed

The wheel ships an in-process fake GigE Vision camera. Just run:

import viva_genicam as vg
from viva_genicam.testing import FakeGigeCamera

with FakeGigeCamera(width=640, height=480, fps=10) as fake:
    cam = vg.connect_gige(fake.device_info())
    print(cam.get("DeviceModelName"))
    with cam.stream() as frames:
        frame = frames.next_frame(timeout_ms=5000)
        print(frame.width, frame.height, frame.pixel_format)

No clone, no cargo build, no subprocess — the fake camera lives inside the same process as your script.

For a fuller end-to-end walkthrough, the repo ships a runnable example:

python crates/viva-pygenicam/examples/demo_fake_camera.py

Expected output:

1. Starting in-process fake GigE camera ...
   bound to 127.0.0.1:3956
2. Discovering ...
   found FakeGigE @ 127.0.0.1
3. Connecting ...
   connected; XML is 16115 bytes, 53 features
4. Reading features:
   Width          = 640
   Height         = 480
   ...
6. Streaming 5 frames ...
   frame 1: 640x480 Mono8  numpy shape=(480, 640) dtype=uint8
   ...
Demo complete — everything ran without any real hardware.

This covers discovery, connection, feature read/write, and streaming — the full surface you will use with a real camera.

Next

Discovery — enumerate cameras with interface control.

Discovery

GigE and USB3 Vision have separate discovery pipelines. Both return frozen dataclasses you can pass straight to connect_gige / connect_u3v.

GigE Vision

import viva_genicam as vg

cams = vg.discover(timeout_ms=500)
for c in cams:
    print(c.ip, c.mac, c.model, c.manufacturer)

vg.discover() sends a GVCP DISCOVERY_CMD broadcast on the default outbound interface and collects ack packets for timeout_ms milliseconds. Returns a list of GigeDeviceInfo:

@dataclass(frozen=True)
class GigeDeviceInfo:
    ip: str                        # "192.168.1.42"
    mac: str                       # "DE:AD:BE:EF:CA:FE"
    manufacturer: Optional[str]
    model: Optional[str]
    transport: Literal["gige"]

Restrict to one NIC

cams = vg.discover(timeout_ms=500, iface="en0")

Use this when the host has multiple NICs and you only want to broadcast out one of them.

Scan every NIC

cams = vg.discover(timeout_ms=500, all=True)

Enumerates every local interface, broadcasts on each, and merges the results. This is what you want on a developer machine where you may not know ahead of time which NIC the camera is on.

Slower cameras

Some cameras are slow to reply or sit on busy networks. Bump the timeout:

cams = vg.discover(timeout_ms=3000, all=True)

USB3 Vision

cams = vg.discover_u3v()
for c in cams:
    print(f"vid:pid=0x{c.vendor_id:04x}:0x{c.product_id:04x}")
    print(f"  bus={c.bus} addr={c.address}")
    print(f"  model={c.model}  serial={c.serial}")

vg.discover_u3v() enumerates USB devices whose interface descriptors match the USB3 Vision class/subclass/protocol triple. Returns a list of U3vDeviceInfo:

@dataclass(frozen=True)
class U3vDeviceInfo:
    bus: int
    address: int
    vendor_id: int
    product_id: int
    serial: Optional[str]
    manufacturer: Optional[str]
    model: Optional[str]
    transport: Literal["u3v"]

USB discovery is synchronous (there is no timeout_ms knob) and does not require any broadcast.

Connecting

Either DeviceInfo type can be passed directly:

cam = vg.connect_gige(cams[0])                 # GigE
cam = vg.connect_u3v(u3v_cams[0])              # U3V
cam = vg.Camera.open(cams[0])                  # dispatches on info type

connect_gige accepts an optional iface= override if you know which NIC should stream from the camera:

cam = vg.connect_gige(info, iface="en0")

When omitted, the stream interface is auto-resolved by matching the camera IP against every local NIC’s subnet. For loopback (e.g. the fake camera) this resolves to lo/lo0 automatically.
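The subnet-matching rule can be illustrated with the standard ipaddress module; the interface names and CIDRs below are made up:

```python
import ipaddress

def pick_iface(camera_ip, local_networks):
    """Return the NIC whose subnet contains the camera IP, or None.

    `local_networks` maps iface name -> the NIC's address in CIDR form.
    """
    cam = ipaddress.ip_address(camera_ip)
    for name, cidr in local_networks.items():
        if cam in ipaddress.ip_network(cidr, strict=False):
            return name
    return None

nics = {"en0": "192.168.0.5/24", "lo0": "127.0.0.1/8"}
print(pick_iface("192.168.0.10", nics))  # en0
print(pick_iface("127.0.0.1", nics))     # lo0
```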

Next

Control & introspection — read and write features, walk the NodeMap.

Control & introspection

Reading and writing features

Camera.get(name) returns the value as a string, formatted per the node’s type. Camera.set(name, value) parses the string according to the node type and writes it.

cam.get("ExposureTime")         # "5000"
cam.get("PixelFormat")          # "Mono8"
cam.get("Width")                # "640"

cam.set("Width", "320")
cam.set("PixelFormat", "Mono8")
cam.set("ExposureTime", "7500.0")

Typed helpers

Two SFNC-standard features have dedicated float setters so you don’t pass numbers as strings:

cam.set_exposure_time_us(10_000.0)
cam.set_gain_db(6.0)

These resolve the canonical SFNC name (ExposureTime / Gain) and fall back to common vendor aliases; use them when you want to be resilient to small XML differences.

Error model

Every control error raises a subclass of vg.GenicamError:

try:
    cam.set("Width", "not-a-number")
except vg.ParseError as e:
    print("bad input:", e)
except vg.GenApiError as e:
    print("nodemap rejected the write:", e)
except vg.TransportError as e:
    print("register I/O failed:", e)

| Exception | When |
| --- | --- |
| GenApiError | Nodemap evaluation: unknown feature, value out of range, predicate failed |
| TransportError | GVCP/USB register read or write failed |
| ParseError | User-supplied value couldn't be parsed per the node's type |
| MissingChunkFeatureError | Chunk selector not present in the camera's XML |
| UnsupportedPixelFormatError | No RGB conversion path for the reported pixel format |

All inherit from GenicamError so except vg.GenicamError: catches everything.

Introspection

List features

cam.nodes()            # ['AcquisitionStart', 'ExposureTime', ... 53 entries]

Node metadata

info = cam.node_info("ExposureTime")
print(info.kind)         # "Float"
print(info.access)       # "RW"
print(info.visibility)   # "Beginner"
print(info.description)  # "Exposure time of the sensor in microseconds."
print(info.writable)     # True
print(info.readable)     # True

NodeInfo fields:

  • name — feature name
  • kind"Integer", "Float", "Enumeration", "Boolean", "Command", "Category", "SwissKnife", "Converter", "IntConverter", "StringReg"
  • access"RO", "RW", "WO", or None (for categories)
  • visibility"Beginner", "Expert", "Guru", "Invisible"
  • display_name, description, tooltip

Plus two convenience properties: readable (access in {"RO","RW"}) and writable (access in {"RW","WO"}).

Enum entries

cam.enum_entries("PixelFormat")
# ['Mono8', 'Mono16', 'BayerRG8', 'RGB8Packed']

Categories

cats = cam.categories()
for cat, children in cats.items():
    print(cat, "->", children)

The categories map mirrors the GenICam XML category tree; each value is the list of child feature names. Use this to render a tree UI or to filter features by area (acquisition, image format, device control, etc.).

All node metadata at once

for info in cam.all_node_info():
    print(info.name, info.kind, info.access)

Useful for exporting a CSV, auto-generating GUI forms, or diffing two cameras’ feature surfaces.
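The CSV-export idea can be sketched with the standard csv module. SimpleNamespace stands in for NodeInfo here; real objects from cam.all_node_info() expose the same attributes:

```python
import csv, io
from types import SimpleNamespace

def nodes_to_csv(infos) -> str:
    """Render name/kind/access of each node as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["name", "kind", "access"])
    for info in infos:
        writer.writerow([info.name, info.kind, info.access or ""])
    return buf.getvalue()

fake = [
    SimpleNamespace(name="ExposureTime", kind="Float", access="RW"),
    SimpleNamespace(name="DeviceControl", kind="Category", access=None),
]
print(nodes_to_csv(fake))
```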

Acquisition control

Without streaming (for example, trigger-mode tests):

cam.acquisition_start()
# ... do something that causes frames to be produced on another channel ...
cam.acquisition_stop()

When you use with cam.stream() as frames: the stream context manager calls these for you on entry/exit. Don’t call them manually if you are using stream().

Raw XML

print(cam.xml[:500])     # first 500 chars of the GenICam XML

Handy for feeding into a GenICam tool, debugging a mystery feature, or archiving the exact schema a camera presented at connect time.

Next

Streaming — sync iterator, NumPy frames, timestamps.

Streaming

Camera.stream() returns a context manager that starts acquisition on entry, stops it on exit, and yields Frame objects while it’s open:

with cam.stream() as frames:
    for frame in frames:
        arr = frame.to_numpy()
        ...

No asyncio required — the underlying tokio runtime is managed inside the extension, and iteration blocks the calling thread while other Python threads stay runnable (the GIL is released during the blocking read).

The Frame object

frame.width              # int
frame.height             # int
frame.pixel_format       # "Mono8", "Mono16", "BayerRG8", "RGB8Packed", ...
frame.pixel_format_code  # raw PFNC integer
frame.ts_dev             # device tick count, Optional[int]
frame.ts_host            # POSIX seconds, Optional[float] (only if time-synced)
frame.payload()          # raw bytes, one copy

to_numpy() — natural shape

arr = frame.to_numpy()

| Pixel format | Array shape | dtype |
| --- | --- | --- |
| Mono8 | (H, W) | uint8 |
| Mono16 | (H, W) | uint16 |
| RGB8Packed | (H, W, 3) | uint8 |
| BGR8Packed, BayerRG8, BayerGB8, BayerBG8, BayerGR8 | (H, W, 3) | uint8 (auto-demosaiced / reordered) |
| anything else | (N,) raw | uint8 |

Demosaicing is a simple nearest-neighbour kernel inside the Rust to_rgb8() path — fine for preview, not a substitute for an ISP.

to_rgb8() — always RGB

rgb = frame.to_rgb8()     # always (H, W, 3) uint8

Useful when you want one code path regardless of the camera’s pixel format.

Raw bytes

frame.payload()           # bytes, copy of the whole GVSP payload

Use this when you need to feed bytes to another decoder or serialize to disk as-is.

Streaming options

The stream() call accepts GigE-specific knobs:

cam.stream(
    iface="en0",               # NIC override
    auto_packet_size=True,     # negotiate the largest packet that fits MTU
    multicast="239.255.42.99", # subscribe to a multicast group instead of unicast
    destination_port=34567,    # fix the streaming UDP port
)

None of these are required. iface= is auto-resolved by subnet match if you omit it; the rest fall back to the camera’s defaults.

For U3V cameras all options are silently ignored.

Timeouts and ending a stream

Iteration blocks until a frame arrives. To time out a single read:

with cam.stream() as frames:
    frame = frames.next_frame(timeout_ms=1000)
    if frame is None:
        print("stream ended cleanly")
    else:
        ...

next_frame() returns None when the stream closes cleanly, or raises TransportError on timeout / network failure. The for frame in frames path uses a 5-second default timeout that raises on expiry.

Exit the with block to stop acquisition and release the socket / USB endpoint. You can also call frames.close() explicitly if you stored the iterator outside a with statement.

Complete example: save 5 frames as PNGs

import viva_genicam as vg
from PIL import Image

cam = vg.connect_gige(vg.discover(timeout_ms=500)[0])

with cam.stream() as frames:
    for i, frame in enumerate(frames, 1):
        Image.fromarray(frame.to_numpy()).save(f"frame_{i:03d}.png")
        if i >= 5:
            break

Identical in spirit to the Rust grab_gige example, in a dozen lines of Python.

Next

API reference — every public symbol, in one page.

API reference

Every public symbol exported from viva_genicam.

Discovery

vg.discover(timeout_ms=500, iface=None, all=False) -> list[GigeDeviceInfo]
vg.discover_u3v() -> list[U3vDeviceInfo]
@dataclass(frozen=True)
class GigeDeviceInfo:
    ip: str
    mac: str
    manufacturer: Optional[str]
    model: Optional[str]
    transport: Literal["gige"]

@dataclass(frozen=True)
class U3vDeviceInfo:
    bus: int
    address: int
    vendor_id: int
    product_id: int
    serial: Optional[str]
    manufacturer: Optional[str]
    model: Optional[str]
    transport: Literal["u3v"]

Both dataclasses expose .to_dict() for JSON-friendly export.

DeviceInfo is the Union[GigeDeviceInfo, U3vDeviceInfo] alias.
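For illustration, a stand-in dataclass with the same shape shows what .to_dict() export looks like (the field values here are made up; the real class is provided by the bindings):

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class GigeDeviceInfo:  # stand-in mirroring the real dataclass above
    ip: str
    mac: str
    manufacturer: Optional[str]
    model: Optional[str]
    transport: str = "gige"

    def to_dict(self) -> dict:
        return asdict(self)

info = GigeDeviceInfo("192.168.0.10", "00:11:22:33:44:55", "Acme", "CamX")
print(json.dumps([info.to_dict()]))  # JSON-friendly device inventory
```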

Connection

vg.connect_gige(device_info: GigeDeviceInfo, iface: Optional[str] = None) -> Camera
vg.connect_u3v(device_info: U3vDeviceInfo) -> Camera
vg.Camera.open(device_info, **kwargs) -> Camera   # dispatches on type

Camera

class Camera:
    transport: str                           # "gige" or "u3v"
    xml: str                                 # raw GenICam XML

    def get(self, name: str) -> str: ...
    def set(self, name: str, value: str) -> None: ...
    def set_exposure_time_us(self, value: float) -> None: ...
    def set_gain_db(self, value: float) -> None: ...
    def enum_entries(self, name: str) -> list[str]: ...

    def nodes(self) -> list[str]: ...
    def node_info(self, name: str) -> Optional[NodeInfo]: ...
    def all_node_info(self) -> list[NodeInfo]: ...
    def categories(self) -> dict[str, list[str]]: ...

    def acquisition_start(self) -> None: ...
    def acquisition_stop(self) -> None: ...

    def stream(
        self,
        iface: Optional[str] = None,
        auto_packet_size: Optional[bool] = None,
        multicast: Optional[str] = None,
        destination_port: Optional[int] = None,
    ) -> FrameStream: ...

NodeInfo

class NodeKind(str, Enum):
    INTEGER       = "Integer"
    FLOAT         = "Float"
    ENUMERATION   = "Enumeration"
    BOOLEAN       = "Boolean"
    COMMAND       = "Command"
    CATEGORY      = "Category"
    SWISS_KNIFE   = "SwissKnife"
    CONVERTER     = "Converter"
    INT_CONVERTER = "IntConverter"
    STRING_REG    = "StringReg"

@dataclass(frozen=True)
class NodeInfo:
    name: str
    kind: str
    access: Optional[str]              # "RO" | "RW" | "WO" | None
    visibility: str                    # "Beginner" | "Expert" | "Guru" | "Invisible"
    display_name: Optional[str]
    description: Optional[str]
    tooltip: Optional[str]

    @property
    def readable(self) -> bool: ...
    @property
    def writable(self) -> bool: ...
    def to_dict(self) -> dict: ...

FrameStream

class FrameStream:
    def __enter__(self) -> "FrameStream": ...      # calls acquisition_start()
    def __exit__(self, *exc) -> None: ...           # calls acquisition_stop() + close()
    def __iter__(self) -> Iterator[Frame]: ...
    def __next__(self) -> Frame: ...                # 5-second default timeout
    def next_frame(self, timeout_ms: Optional[int] = None) -> Optional[Frame]: ...
    def close(self) -> None: ...

Frame

class Frame:
    width: int
    height: int
    pixel_format: str
    pixel_format_code: int
    ts_dev: Optional[int]
    ts_host: Optional[float]

    def payload(self) -> bytes: ...
    def to_numpy(self) -> numpy.ndarray: ...        # natural shape per pixel format
    def to_rgb8(self) -> numpy.ndarray: ...         # always (H, W, 3) uint8

Exceptions

GenicamError                      # base class
├── GenApiError
├── TransportError
├── ParseError
├── MissingChunkFeatureError
└── UnsupportedPixelFormatError

All raised by the camera / frame APIs inherit from GenicamError, so one except vg.GenicamError: catches every bindings-level failure.
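The catch-all pattern, shown with stand-in classes that mirror the documented hierarchy (the real exceptions live in viva_genicam):

```python
# Stand-in classes mirroring the documented hierarchy -- illustrative only.
class GenicamError(Exception): pass
class TransportError(GenicamError): pass
class ParseError(GenicamError): pass

def grab():
    # Pretend a read timed out somewhere in the transport layer.
    raise TransportError("read timed out")

try:
    grab()
except GenicamError as e:   # one handler catches every bindings-level failure
    print(f"camera failure: {e}")
```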

Networking

This chapter is a practical GigE Vision networking cookbook.

It focuses on:

  • Typical topologies (direct cable vs switch, single vs multi-camera).
  • NIC and IP configuration on Windows, Linux, and macOS.
  • MTU / jumbo frames and packet delay basics.
  • Common pitfalls and troubleshooting.

It is not a replacement for vendor or A3 documentation, but gives you enough background to make viva-camctl and the viva-genicam examples work reliably.

If you have not yet done so, first go through:

They show the CLI and Rust-side pieces that depend on a working network setup.


1. Typical topologies

1.1. Single camera, direct connection

The simplest and most robust setup:

[Camera]  <── Ethernet cable ──>  [Host NIC]

Characteristics:

  • One camera, one host, one NIC.
  • No other traffic on that link.
  • Easy to reason about MTU and packet delay.

Recommended when:

  • You’re bringing up a new camera.
  • You’re debugging issues and want to remove variables.

1.2. One or more cameras through a switch

Common in real systems:

[Cam A] ──\
           \
[Cam B] ────[Switch]──[Host NIC]
           /
[Cam C] ─/

Characteristics:

  • Multiple cameras share the link to the host.
  • Switch must handle the aggregate throughput.
  • Switch configuration (buffer sizes, jumbo frames, spanning tree) matters. 

Recommended when:

  • You need more than one camera.
  • You need long cable runs or multi-drop layouts.

1.3. Host with multiple NICs

For high throughput or separation from office traffic:

[Cam network]  <── NIC #1 ──>  [Host]  <── NIC #2 ──>  [Office / internet]

Characteristics:

  • Camera traffic isolated from general network.
  • Easier to tune MTU, QoS, and firewall rules.
  • In discovery and streaming, you may need to specify --iface.

Recommended for:

  • High data rates.
  • Multi-camera setups.
  • Systems that must not be disturbed by office network traffic.

2. IP addressing basics

GigE Vision uses standard IPv4 + UDP. Each device needs a valid IPv4 address; the host and camera(s) must share a subnet. 

2.1. Choose a camera subnet

Pick a private network, for example:

  • 192.168.0.0/24 (addresses 192.168.0.1–192.168.0.254)
  • 10.0.0.0/24

Decide on:

  • One address for your host NIC (e.g. 192.168.0.5).
  • One address per camera (e.g. 192.168.0.10, 192.168.0.11, …).

Make sure this subnet does not conflict with your office / internet network.
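A quick sanity check for the addressing plan, using only the Python standard library (the addresses here are the examples above):

```python
import ipaddress

def same_subnet(host_ip: str, camera_ip: str, prefix: int = 24) -> bool:
    """True if both IPv4 addresses fall inside the same network."""
    net = ipaddress.ip_network(f"{host_ip}/{prefix}", strict=False)
    return ipaddress.ip_address(camera_ip) in net

print(same_subnet("192.168.0.5", "192.168.0.10"))  # True: host can reach camera
print(same_subnet("192.168.0.5", "10.0.0.10"))     # False: different subnet
```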

2.2. Windows

  1. Open Network & Internet Settings → Change adapter options.
  2. Right-click the NIC used for cameras → Properties.
  3. Select Internet Protocol Version 4 (TCP/IPv4) → Properties.
  4. Choose Use the following IP address:
    • IP address: e.g. 192.168.0.5
    • Subnet mask: 255.255.255.0
    • Gateway: leave empty (for isolated camera networks).
  5. Turn off any “energy saving” features for this NIC in the driver settings if possible (they can introduce latency/jitter).

On first run, Windows firewall may pop up asking whether to allow the binary on Private / Public networks. Allow it on the relevant profile so UDP broadcasts work.

2.3. Linux

Use either NetworkManager or manual configuration.

Manual example:

# Assign IP and bring interface up (replace eth1 with your device)
sudo ip addr add 192.168.0.5/24 dev eth1
sudo ip link set eth1 up

To make this permanent, use your distro’s network configuration tools (e.g. Netplan on Ubuntu, ifcfg files on RHEL, etc.).

2.4. macOS

Use System Settings → Network:

  1. Select the camera NIC (e.g. USB Ethernet).
  2. Set “Configure IPv4” to “Manually”.
  3. Enter:
    • IP address: 192.168.0.5
    • Subnet mask: 255.255.255.0
  4. Leave router/gateway empty for a dedicated camera network.

3. MTU and jumbo frames

MTU (Maximum Transmission Unit) determines the largest Ethernet frame size. Standard MTU is 1500 bytes; jumbo frames extend this (e.g. 9000 bytes). For large images, jumbo frames can significantly reduce protocol overhead and CPU load. 

3.1. When to care

You probably need to look at MTU when:

  • Frame sizes are large (multi-megapixel).
  • Frame rates are high (tens or hundreds of FPS).
  • You see lots of packet drops or resends at otherwise reasonable loads.

For simple bring-up and low/moderate data rates, standard MTU=1500 usually works.

3.2. Enabling jumbo frames

All components in the path must agree:

  • Camera
  • Switch (if present)
  • Host NIC

Typical steps:

  • Camera: set GevSCPSPacketSize or similar feature to a value below the path MTU (e.g. 8192 for MTU 9000). You can use viva-camctl set to adjust this.
  • Switch: enable jumbo frames in the management UI (name and steps vary by vendor).
  • Host NIC:
    • Windows: NIC properties → Advanced → Jumbo Packet or similar.
    • Linux: sudo ip link set dev eth1 mtu 9000
    • macOS: some drivers expose MTU setting in the network settings; others do not support jumbo frames.

After changing MTU, confirm with:

# Linux example
ip link show eth1

and check that TX/RX MTU matches your expectation.
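To see why jumbo frames matter, count the GVSP data packets needed per image. The per-packet overhead figure below is an assumption (20 B IPv4 + 8 B UDP + 8 B classic GVSP header; leader/trailer packets ignored):

```python
def packets_per_frame(image_bytes: int, packet_size: int, overhead: int = 36) -> int:
    """GVSP data packets needed for one image payload (ceiling division)."""
    payload = packet_size - overhead  # usable image bytes per packet
    return -(-image_bytes // payload)

frame = 2048 * 1536  # a 3 MP Mono8 image
print(packets_per_frame(frame, 1500))  # standard MTU
print(packets_per_frame(frame, 9000))  # jumbo frames: roughly 6x fewer packets
```

Fewer packets means fewer header bytes on the wire and fewer per-packet interrupts on the host.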

4. Packet delay and flow control

Some cameras allow configuring inter-packet delay or packet interval:

  • Without delay:
    • Camera sends packets as fast as possible.
    • High instantaneous bursts can overwhelm NICs / switches.
  • With modest delay:
    • Traffic is smoother at the cost of a small increase in latency.

If you see high packet loss or many resends at high frame rates:

  1. Try slightly increasing the inter-packet delay.
  2. Observe:
    • Does the drop/resend rate decrease?
    • Is overall throughput still sufficient?

Some vendors also expose “frame rate limits” or “burst size” options. These can also be used to ease pressure on the network at the cost of lower peak FPS. 

5. Multi-camera considerations

When running multiple cameras:

  • Total throughput is roughly the sum of each camera’s stream.
  • The bottleneck can be:
    • The switch’s uplink to the host.
    • The host NIC’s capacity.
    • Host CPU / memory bandwidth.
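
A back-of-the-envelope aggregate-throughput check (the camera parameters here are hypothetical):

```python
def stream_mbps(width: int, height: int, bytes_per_px: float, fps: float) -> float:
    """Approximate payload data rate of one camera stream, in Mbit/s."""
    return width * height * bytes_per_px * fps * 8 / 1e6

# Three hypothetical 1920x1200 Mono8 cameras at 25 FPS through one switch:
total = 3 * stream_mbps(1920, 1200, 1, 25)
print(round(total, 1), "Mbit/s")  # exceeds a single 1000 Mbit/s uplink
```

If the sum lands near or above the uplink speed, drops and resends are expected no matter how the software is tuned.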

Practical tips:

  • Prefer a dedicated NIC for cameras.
  • For 2–4 high-speed cameras, consider:
    • Multi-port NICs.
    • Separating cameras onto different NICs if possible.
  • Stagger packet timing:
    • Slightly different inter-packet delays for each camera.
    • Slightly different frame rates, where acceptable.

Monitor:

  • Per-camera stats (drops, resends, throughput).
  • Host CPU usage.
  • Switch port statistics if your hardware exposes them.

6. Using --iface and discovery quirks

On systems with more than one active NIC, automatic interface selection might pick the wrong one.

  • In viva-camctl, use --iface to force the correct NIC.
  • In Rust examples, pass the desired local address when building the context or stream (see the genicam and viva-gige crate chapters for details).

If discovery only works when you specify --iface, but not without it:

  • You likely have:
    • Multiple NICs on overlapping subnets, or
    • A default route that prefers a different interface.
  • This is not unusual; be explicit for production setups.

7. Troubleshooting checklist

Use this checklist when things don’t work as expected.

7.1. Discovery fails

See also the troubleshooting section in Discovery.

  • Check link LEDs on camera, switch, and NIC.
  • Confirm IP addressing:
    • Host and camera on same subnet.
    • No conflicting IPs.
  • Check firewall:
    • Allow UDP broadcast / unicast on the camera NIC.
  • Temporarily:
    • Disable other NICs to simplify routing.
    • Try a direct cable instead of a switch.

7.2. Streaming is unstable (drops / resends)

  • Check MTU vs packet size; avoid exceeding path MTU.
  • For high data rates:
    • Enable jumbo frames end-to-end (camera, switch, NIC).
  • Reduce stress:
    • Lower frame rate or ROI.
    • Increase inter-packet delay slightly.
  • Ensure dedicated NIC and switch where possible.
  • Watch host CPU; if it’s near 100%, consider:
    • Better NIC / driver.
    • Moving processing off to another thread / core.

7.3. Vendor tool works, viva-genicam does not

Compare:

  • Which NIC / IP the vendor tool uses.
  • The camera’s configured stream destination (IP/port).

The vendor tool might:

  • Use a different MTU / packet size.
  • Adjust inter-packet delay automatically.

Try to replicate those parameters with viva-camctl and the NodeMap.

8. Recap

After this chapter you should:

  • Understand basic GigE Vision network topologies and when to use each.
  • Be able to configure a host NIC and camera addresses on Windows, Linux, and macOS.
  • Know when and how to enable jumbo frames and adjust packet delay.
  • Have a structured approach to debugging discovery and streaming issues.

For protocol-level details and tuning options exposed by this project:

  • See viva-gige for transport internals.
  • See the Streaming tutorial for concrete CLI and Rust examples.


Error Handling & Logging

Error Types

Each crate defines its own error type:

  • GenicamError – high-level facade errors
  • GigeError – GVCP/GVSP transport errors
  • GenApiError – node evaluation and register I/O errors
  • GenCpError – GenCP protocol encoding errors
  • XmlError – XML parsing errors

All error types implement std::error::Error and Display.

Logging

The workspace uses the tracing crate for structured logging. Enable it with:

tracing_subscriber::fmt::init();

Or set RUST_LOG=debug to see detailed protocol traces.

Testing

Unit Tests

cargo test --workspace

Unit tests are embedded in source modules (mod tests { }).

Integration Tests

The workspace includes viva-fake-gige, an in-process GigE Vision camera simulator. All integration tests run automatically with cargo test – no external tools or hardware required.

# Run all tests (unit + integration)
cargo test --workspace

# Run integration tests specifically
cargo test -p viva-genicam --test fake_camera

# Run viva-service end-to-end tests (Zenoh bridge)
cargo test -p viva-service --test fake_camera_e2e

The fake camera supports:

  • GVCP discovery on UDP (loopback)
  • GenCP register read/write with an embedded GenApi XML
  • GVSP streaming with synthetic image frames and real timestamps
  • Chunk data (timestamp, exposure time) when ChunkModeActive is enabled
  • Timestamp features (GevTimestampTickFrequency, GevTimestampValue, TimestampLatch)

Demo

Run the self-contained demo to see the full workflow without hardware:

cargo run -p viva-genicam --example demo_fake_camera

This starts a fake camera, discovers it, reads/writes features, and streams frames – all on localhost with zero setup.

Manual / Interactive Testing

For interactive testing or E2E testing with genicam-studio, start the fake camera as a standalone server:

# Stays alive until Ctrl+C
cargo run -p viva-fake-gige
cargo run -p viva-fake-gige -- --width 512 --height 512 --fps 15

Then use viva-camctl or viva-service to interact with it. See the Testing without hardware tutorial for details.

Contributing

Contributions are welcome! Please open an issue or pull request on GitHub.

Development Setup

# Build the workspace
cargo build --workspace

# Run tests (includes fake camera integration tests)
cargo test --workspace

# Lint
cargo clippy --workspace --all-targets -- -D warnings
cargo fmt --all --check

Code Style

  • Follow rustfmt defaults
  • Keep clippy warnings clean
  • Add doc comments to all public items

FAQ

This page collects short answers to questions that come up often when using viva-genicam or bringing up a new camera.

If you are stuck, also check:

and the issues in the GitHub repository.


“Discovery finds no cameras. What do I check first?”

Run:

cargo run -p viva-camctl -- list

If it shows nothing:

  1. Physical link
    • Are the link LEDs lit on camera, NIC, and switch?
    • Try a different cable or port.
  2. IP addresses
    • Host NIC and camera must be on the same subnet (e.g. 192.168.0.x/24).
    • Avoid having two NICs on the same subnet; routing will get confused.
  3. Firewall
    • Allow UDP broadcast/unicast on the NIC used for cameras.
    • On Windows, make sure the binary is allowed on the relevant network profile (Private / Domain).
  4. Multiple NICs
    • Use --iface to force the interface:
cargo run -p viva-camctl -- list --iface 192.168.0.5

See also: Discovery tutorial and Networking.

“The vendor viewer works but viva-genicam doesn’t. Why?”

Common causes:

  • Different NIC / interface:
    • The vendor tool may be using a different NIC or IP selection strategy.
    • Compare which local IP it uses and pass that as --iface to viva-camctl.
  • Different stream destination:
    • The camera might be configured to stream to a specific IP/port.
    • Ensure viva-genicam uses the same host IP and port, or reset the camera configuration to defaults.
  • Different MTU / packet size / packet delay:
    • Vendor tools sometimes auto-tune these.
    • Try matching their settings using GenApi features (packet size, frame rate, inter-packet delay).

When in doubt:

  • Capture logs with RUST_LOG=debug and compare behaviour at the same frame rate and resolution.

See: Streaming and Networking.

“Does this work on Windows?”

Yes. Windows is a first-class target alongside Linux and macOS.

Notes:

  • Make sure the firewall allows discovery and streaming:
    • When Windows asks whether to allow the executable on Private/Public networks, allow it on the profile you use for the camera network.
  • Configure the NIC for the camera network with a static IPv4 address, separate from your office/internet NIC.
  • For high-throughput setups:
    • Consider enabling jumbo frames on the camera NIC.
    • Disable power-saving features that can introduce latency.

See: Networking for NIC configuration details.

“How do I set exposure, gain, pixel format, etc.?”

Use the GenApi features via viva-camctl or the viva-genicam crate.

Examples with viva-camctl:

# Read ExposureTime
cargo run -p viva-camctl -- \
  get --ip 192.168.0.10 --name ExposureTime

# Set ExposureTime to 5000 (units depend on camera, often microseconds)
cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name ExposureTime --value 5000

# Set PixelFormat by name
cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name PixelFormat --value Mono8

For more, see: Registers & features.

“What are selectors and why do my changes seem to disappear?”

Many cameras use selectors to multiplex multiple logical settings onto one feature. Example:

  • GainSelector = All, Red, Green, Blue, …
  • Gain = value for the currently selected channel.

If you set Gain without first setting GainSelector, you might be modifying a different “row” than you expect.

Typical sequence:

cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name GainSelector --value Red

cargo run -p viva-camctl -- \
  set --ip 192.168.0.10 --name Gain --value 5.0

See: Registers & features and the selectors_demo example in the viva-genicam crate.

“Do I need to care about the GenApi XML?”

For most applications, no:

  • You can use features by name and let viva-genapi handle the mapping.

You should look at the XML when:

  • A feature behaves differently from the SFNC / vendor documentation.
  • You are debugging selector or SwissKnife behaviour.
  • You are contributing to viva-genapi or genapi-xml.

See: GenApi XML tutorial and the crate chapters for viva-genapi-xml and viva-genapi when they are filled in.

“How do I save frames and look at them?”

With viva-camctl:

  • Use stream with an option like --count / --output (exact flags depend on the CLI):
cargo run -p viva-camctl -- \
  stream --ip 192.168.0.10 --iface 192.168.0.5 \
  --count 100 --output ./frames

This typically saves a sequence of frames in a simple format (e.g. raw, PGM/PPM) that you can inspect with:

  • Image viewers.
  • Python + NumPy + OpenCV.
  • Your own Rust tools.

See: Streaming.

“How do I generate documentation?”

  • mdBook (this book):
    • From the repository root:

cargo install mdbook  # if not already installed
mdbook build book

    • The rendered HTML will be under book/book/.
  • Rust API docs:
    • From the repository root:

cargo doc --workspace --all-features

    • The rendered HTML will be under target/doc/.

Many users publish these via GitHub Pages or another static host; see the repository CI configuration for details.

“Where should I report bugs or ask questions?”

  • For bugs or feature requests, open an issue in the GitHub repository with:
    • A clear description of the problem.
    • Your OS, Rust version, and camera model.
    • A minimal reproduction if possible (CLI commands or small Rust snippet).
    • Relevant logs (e.g. RUST_LOG=debug output).
  • For questions that may be general (not specific to this project), link to:
    • The camera’s data sheet or GenICam XML snippet if relevant.
    • Any vendor tools you used to compare behaviour.

Good issues make it much easier to improve the crates for everyone.


API reference

The API reference is generated with cargo doc and published together with this book.

For crates in this workspace:

Glossary

Term        Definition
GenICam     Generic Interface for Cameras – an EMVA standard for camera control
GenApi      Generic API – the XML-based feature description layer of GenICam
GenCP       Generic Control Protocol – transport-agnostic register read/write
GVCP        GigE Vision Control Protocol – UDP-based control channel
GVSP        GigE Vision Streaming Protocol – UDP-based image data channel
SFNC        Standard Feature Naming Convention – standard camera feature names
PFNC        Pixel Format Naming Convention – standard pixel format codes
CCP         Control Channel Privilege – exclusive camera access token
GenTL       Generic Transport Layer – shared library interface for camera transport
NodeMap     In-memory representation of GenApi features parsed from XML
RegisterIo  Trait abstracting register read/write over any transport

License

This project is licensed under the MIT License.

See the LICENSE file in the repository root for the full text.