Process Data from a PushBroom Device in a Model (Chunkwise Separate Thread)

This example connects to the virtual pushbroom camera (using MiniCube.hdr and its references) and measures a white and a dark reference. It then creates a recording context (to normalize the HSI data), a set of buffer containers (to temporarily store the normalized data), and a processing context for the model instrument_function.fluxmdl (which processes the normalized data).

Data acquisition and processing of the data occur in separate threads. So that the two processing contexts do not conflict with one another, they must be created for separate processing queue sets.

This illustrates how HSI data can be gathered and then processed in chunks (instead of the line-by-line processing that fluxEngine performs by default).

For each of the first 4 cubes assembled by the code, the spectral average of the 4 left-most pixels of the first 10 lines will be shown to the user.

C++

The source code of the example can be found in the file example_pushbroom_process_model_chunkwise_threads.cpp:

  1#if defined(_WIN32) && defined(_MSC_VER)
  2#include <windows.h>
  3#endif
  4
  5#include <iostream>
  6#include <iomanip>
  7#include <string>
  8#include <fstream>
  9#include <streambuf>
 10#include <algorithm>
 11#include <utility>
 12#include <mutex>
 13#include <condition_variable>
 14#include <thread>
 15#include <atomic>
 16
 17#include <cstddef>
 18
 19#include <fluxEngine/fluxEngine>
 20
 21#include "paths.h"
 22#include "helpers.h"
 23
 24/* Processing Queue Sets for the recording and the processing
 25 * contexts. Since both contexts may be executed simultaneously,
 26 * two different queue sets are required.
 27 */
 28static fluxEngine::ProcessingQueueSet g_pqsRecord;
 29static fluxEngine::ProcessingQueueSet g_pqsProcess;
 30
 31/* Recording and processing contexts:
 32 * 
 33 *  - The recording context gathers all HSI lines into buffer
 34 *    containers
 35 *
 36 *  - The processing context takes a full buffer container and
 37 *    processes it as a cube with a given model
 38 */
 39static fluxEngine::ProcessingContext g_ctxRecord;
 40static fluxEngine::ProcessingContext g_ctxProcess;
 41
 42// The camera (used by the recording thread)
 43static fluxEngine::InstrumentDevice* g_camera{};
 44
 45/* Global variables related to the exchange of buffers between
 46 * the recording thread and the main thread.
 47 */
 48
 49// The number of buffer containers to use (minimum 2)
 50static std::size_t const g_bufferCount = 2;
 51// The size of each buffer container in number of lines
 52static std::size_t const g_bufferSize = 32;
 53// The buffer containers (will be initialized later)
 54static fluxEngine::BufferContainer g_buffers[g_bufferCount];
 55// A mutex to lock the g_currentBuffer/g_currentBufferWriting
 56// variables
 57static std::mutex g_currentBufferMutex;
 58// A condition variable to notify when a buffer has been
 59// filled
 60static std::condition_variable g_currentBufferCV;
 61// The index of the current buffer that is currently being
 62// written to by the recording thread
 63static std::size_t g_currentBufferWriting = 0;
 64// The index of the last buffer that was filled by the
 65// recording thread
 66static std::size_t g_currentBuffer = 0;
 67
 68// A flag variable to indicate to the recording thread it
 69// should terminate
 70static std::atomic<bool> g_recordingThreadTerminateFlag{false};
 71// The recording thread
 72static std::thread g_recordingThread;
 73
 74// Helper function to extract the dimensions of a
 75// reference measurement for use with a manual HSI cube
 76// processing context
 77static inline std::vector<std::int64_t> dimensionsVector(fluxEngine::ProcessingContext::ReferenceMeasurement const& m)
 78{
 79    int const order = m.order();
 80    std::array<std::int64_t, 5> const dimensions = m.dimensions();
 81    std::vector<std::int64_t> result(dimensions.cbegin(), dimensions.cbegin() + order);
 82    result.insert(result.begin(), 1);
 83    return result;
 84}
 85
 86// Helper function to extract the strides of a
 87// reference measurement for use with a manual HSI cube
 88// processing context
 89static inline std::vector<std::int64_t> stridesVector(fluxEngine::ProcessingContext::ReferenceMeasurement const& m)
 90{
 91    int const order = m.order();
 92    std::array<std::int64_t, 5> const strides = m.strides();
 93    std::vector<std::int64_t> result(strides.cbegin(), strides.cbegin() + order);
 94    result.insert(result.begin(), result.front());
 95    return result;
 96}
 97
 98// Helper function to create a manual HSI cube processing
 99// context using a specific processing queue set
100static inline fluxEngine::ProcessingContext createCubeProcessingContext(fluxEngine::Model& model,
101                                                                        fluxEngine::ProcessingQueueSet& pqs,
102                                                                        fluxEngine::ProcessingContext::HSIRecordingResult& recordingResultInfo,
103                                                                        fluxEngine::DataType processingInputDataType,
104                                                                        std::int64_t maxHeight, std::int64_t width)
105{
106    fluxEngine::ContextInputDataInfo processingInputInfo;
107    processingInputInfo.inputValueType = recordingResultInfo.actualValueType;
108    processingInputInfo.calibrationInfo = &recordingResultInfo.calibrationInfo;
109    if (recordingResultInfo.whiteReference) {
110        processingInputInfo.referenceInfo.whiteReferenceDataType = recordingResultInfo.whiteReference.dataType();
111        processingInputInfo.referenceInfo.whiteReference = recordingResultInfo.whiteReference.data();
112        processingInputInfo.referenceInfo.whiteReferenceDimensions = dimensionsVector(recordingResultInfo.whiteReference);
113        processingInputInfo.referenceInfo.whiteReferenceStrides = stridesVector(recordingResultInfo.whiteReference);
114    }
115    if (recordingResultInfo.illuminationReference) {
116        processingInputInfo.referenceInfo.illuminationReferenceDataType = recordingResultInfo.illuminationReference.dataType();
117        processingInputInfo.referenceInfo.illuminationReference = recordingResultInfo.illuminationReference.data();
118        processingInputInfo.referenceInfo.illuminationReferenceDimensions = dimensionsVector(recordingResultInfo.illuminationReference);
119        processingInputInfo.referenceInfo.illuminationReferenceStrides = stridesVector(recordingResultInfo.illuminationReference);
120    }
121    if (recordingResultInfo.darkReference) {
122        processingInputInfo.referenceInfo.darkReferenceDataType = recordingResultInfo.darkReference.dataType();
123        processingInputInfo.referenceInfo.darkReference = recordingResultInfo.darkReference.data();
124        processingInputInfo.referenceInfo.darkReferenceDimensions = dimensionsVector(recordingResultInfo.darkReference);
125        processingInputInfo.referenceInfo.darkReferenceStrides = stridesVector(recordingResultInfo.darkReference);
126    }
127
128    return fluxEngine::ProcessingContext(model, pqs, fluxEngine::ProcessingContext::HSICube, fluxEngine::HSICube_StorageOrder::BIP,
129                                         processingInputDataType, maxHeight, -1, width, width, recordingResultInfo.wavelengths,
130                                         processingInputInfo);
131}
132
133// Terminate the recording thread
134static void terminateRecordingThread()
135{
136    g_recordingThreadTerminateFlag.store(true, std::memory_order_relaxed);
137    if (g_recordingThread.joinable())
138        g_recordingThread.join();
139}
140
141// The main function of the recording thread
142static void recordingThreadMain()
143{
144    while (!g_recordingThreadTerminateFlag.load(std::memory_order_relaxed)) {
145        try {
146            fluxEngine::BufferInfo buffer = g_camera->retrieveBuffer(std::chrono::milliseconds{100});
147            if (!buffer.ok)
148                continue;
149
150            g_ctxRecord.setSourceData(buffer);
151            g_ctxRecord.processNext();
152            {
153                std::unique_lock lock(g_currentBufferMutex);
154                if (g_buffers[g_currentBufferWriting].count() < g_bufferSize) {
155                    g_buffers[g_currentBufferWriting].addLastResult(g_ctxRecord);
156                    if (g_buffers[g_currentBufferWriting].count() >= g_bufferSize) {
157                        g_currentBuffer = g_currentBufferWriting;
158                        g_currentBufferWriting = (g_currentBufferWriting + 1) % g_bufferCount;
159                        g_currentBufferCV.notify_one();
160                    }
161                } else {
162                    std::cerr << "Warning: processing did not extract data from buffer fast enough, ignoring frame\n" << std::flush;
163                }
164            }
165
166            g_camera->returnBuffer(buffer.id);
167        } catch (std::exception& e) {
168            std::cerr << "Warning: exception during recording: " << e.what() << '\n' << std::flush;
169        } catch (...) {
170            std::cerr << "Warning: unknown exception during recording\n" << std::flush;
171        }
172    }
173}
174
175int main()
176{
177    fluxEngine::Handle handle;
178    try {
179        std::cout << "fluxEngine version: " << fluxEngine::versionString() << std::endl;
180        handle = fluxEngine::Handle(readFile(g_licenseFileName));
181        handle.setDriverBaseDirectory(g_driverDirectory);
182        handle.setDriverIsolationExecutable(g_driverIsolationExecutable);
183        g_pqsProcess = fluxEngine::ProcessingQueueSet(handle);
184        g_pqsProcess.createProcessingThreads(4);
185        g_pqsRecord  = fluxEngine::ProcessingQueueSet(handle);
186        g_pqsRecord.createProcessingThreads(2);
187
188        // Load virtual camera
189        fluxEngine::EnumerationResult enumeratedDevices = fluxEngine::enumerateDevices(handle, -1, std::chrono::seconds{1});
190        fluxEngine::EnumeratedDevice* virtualCameraDevice = nullptr;
191        for (auto const& device : enumeratedDevices.devices) {
192            if (device->driver->name == "VirtualHyperCamera") {
193                virtualCameraDevice = device.get();
194                break;
195            }
196        }
197
198        if (!virtualCameraDevice)
199            throw std::runtime_error("Could not find virtual camera driver");
200
201        fluxEngine::ConnectionSettings connectionSettings;
202        connectionSettings.driverName = virtualCameraDevice->driver->name;
203        connectionSettings.driverType = virtualCameraDevice->driver->type;
204        connectionSettings.id = virtualCameraDevice->id;
205        connectionSettings.timeout = std::chrono::seconds{60};
206        connectionSettings.connectionParameters["Cube"] = encodeFileNameForConnectionParameter(g_cubeFileName);
207        connectionSettings.connectionParameters["WhiteReferenceCube"] = encodeFileNameForConnectionParameter(g_whiteCubeFileName);
208        connectionSettings.connectionParameters["DarkReferenceCube"] = encodeFileNameForConnectionParameter(g_darkCubeFileName);
209
210        std::cout << "Attempting to connect to device...\n" << std::flush;
211        for (auto const& parameter : connectionSettings.connectionParameters)
212            std::cout << "  - " << parameter.first << ": " << parameter.second << "\n" << std::flush;
213        fluxEngine::DeviceGroup deviceGroup = fluxEngine::connectDeviceGroup(handle, connectionSettings);
214        std::cout << "Connected.\n" << std::flush;
215        g_camera = dynamic_cast<fluxEngine::InstrumentDevice*>(deviceGroup.primaryDevice());
216        if (!g_camera) {
217            deviceGroup.disconnect(std::chrono::seconds{5});
218            throw std::runtime_error("The device is not an instrument device");
219        }
220
221        g_camera->setupInternalBuffers(5);
222
223        /* Load model
224         *
225         * This should be done after connecting with the camera, in
226         * case the license is tied to a camera serial number. (In
227         * case the license is tied to a dongle or a mainboard id,
228         * this may be done beforehand.)
229         */
230        fluxEngine::Model model = fluxEngine::Model(handle, fluxEngine::Model::FromFile, g_modelFileName);
231
232        /* NOTE:
233         * For real devices at this point the user should probably be
234         * asked to insert a white reference underneath the camera.
235         *
236         * For the virtual device this is not required.
237         */
238
239        fluxEngine::InstrumentDevice::AcquisitionParameters acqParams;
240        std::cout << "Measuring white reference:\n" << std::flush;
241        fluxEngine::BufferContainer whiteReference = fluxEngine::createRingBufferContainer(g_camera, 10);
242        acqParams.referenceName = "WhiteReference";
243        g_camera->startAcquisition(acqParams);
244        for (int i = 0; i < 10; ++i) {
245            fluxEngine::BufferInfo buffer = g_camera->retrieveBuffer(std::chrono::seconds{1});
246            if (buffer.ok) {
247                whiteReference.add(buffer);
248                g_camera->returnBuffer(buffer.id);
249            }
250        }
251        g_camera->stopAcquisition();
252        std::cout << "Done.\n" << std::flush;
253
254        /* NOTE:
255         * For real devices at this point the user should probably be
256         * asked to obscure the optics in front of the camera in order
257         * for a proper dark reference to be measured.
258         *
259         * For the virtual device this is not required.
260         *
261         * Some cameras have an internal shutter, in which case manual
262         * user intervention is not required here either.
263         */
264
265        std::cout << "Measuring dark reference:\n" << std::flush;
266        fluxEngine::BufferContainer darkReference = fluxEngine::createBufferContainer(g_camera, 10);
267        acqParams.referenceName = "DarkReference";
268        g_camera->startAcquisition(acqParams);
269        for (int i = 0; i < 10; ++i) {
270            fluxEngine::BufferInfo buffer = g_camera->retrieveBuffer(std::chrono::seconds{1});
271            if (buffer.ok) {
272                darkReference.add(buffer);
273                g_camera->returnBuffer(buffer.id);
274            }
275        }
276        g_camera->stopAcquisition();
277        std::cout << "Done.\n" << std::flush;
278
279        /* Create recording context. This will be used to store
280         * the recorded data in.
281         */
282        fluxEngine::ProcessingContext::InstrumentParameters instrumentParameters;
283        instrumentParameters.whiteReference = &whiteReference;
284        instrumentParameters.darkReference = &darkReference;
285        fluxEngine::ProcessingContext::HSIRecordingResult contextAndInfo = fluxEngine::ProcessingContext::createInstrumentHSIRecordingContext(g_camera, g_pqsRecord, fluxEngine::ValueType::Intensity, instrumentParameters, {});
286        g_ctxRecord = std::move(contextAndInfo.context);
287
288        // Create buffers to store HSI cube data in
289        for (std::size_t i = 0; i < g_bufferCount; ++i)
290            g_buffers[i] = fluxEngine::createBufferContainer(g_ctxRecord, g_bufferSize);
291
292        /* Create the context that will process the cube.
293         */
294        fluxEngine::DataType processingInputDataType = g_ctxRecord.outputSinkTensorStructure(0).dataType;
295        g_ctxProcess = createCubeProcessingContext(model, g_pqsProcess, contextAndInfo, processingInputDataType, static_cast<std::int64_t>(g_bufferSize), g_buffers[0].dimensions()[1]);
296
297        std::vector<char> data;
298        int const sinkIndex = g_ctxProcess.findOutputSink(/* outputId = */ 0);
299
300        /* NOTE:
301         * For real devices at this point the user should probably be
302         * asked to position the object to measure underneath the
303         * camera and start the motion of the motion control device
304         * they have.
305         *
306         * For the virtual device this is not required.
307         */
308
309        std::cout << "Starting acquisition:\n" << std::flush;
310        acqParams.referenceName = {};
311        g_camera->startAcquisition(acqParams);
312        std::cout << "Done.\n" << std::flush;
313
314        {
315            // Ensure that all buffers are cleared before we begin
316            // with the thread
317            std::unique_lock lock(g_currentBufferMutex);
318            g_currentBuffer = 0;
319            g_currentBufferWriting = 0;
320            for (std::size_t i = 0; i < g_bufferCount; ++i)
321                g_buffers[i].clear();
322        }
323        // Start the recording thread
324        g_recordingThreadTerminateFlag.store(false, std::memory_order_relaxed);
325        g_recordingThread = std::thread(&recordingThreadMain);
326
327        try {
328            // Wait for 4 cubes and process each of them
329            std::size_t const nCubesToProcess = 4;
330
331            std::vector<char> data;
332
333            for (std::size_t i = 0; i < nCubesToProcess; ++i) {
334                std::int64_t height = 0, width = 0;
335
336                {
337                    // Wait for the current buffer to have filled up
338                    std::unique_lock lock(g_currentBufferMutex);
339                    g_currentBufferCV.wait(lock, [] () -> bool {
340                        return g_buffers[g_currentBuffer].count() == static_cast<std::int64_t>(g_bufferSize);
341                    });
342
343                    fluxEngine::BufferContainer& buffer = g_buffers[g_currentBuffer];
344                    data.resize(buffer.bytesTotal());
345                    buffer.copyData(data.data(), data.size());
346                    height = buffer.dimensions()[0];
347                    width = buffer.dimensions()[1];
348                    buffer.clear();
349                }
350
351                std::cout << "Got new cube to process...\n" << std::flush;
352                g_ctxProcess.setSourceData(fluxEngine::ProcessingContext::HSICube, height, width, data.data());
353                g_ctxProcess.processNext();
354
355                fluxEngine::TensorData view{g_ctxProcess.outputSinkData(sinkIndex)};
356                if (view.order() == 3
357                    && view.dimension(2) == 1
358                    && view.dataType() == fluxEngine::DataType::Float64) {
359                    for (int y = 0; y < 10; ++y) {
360                        for (int x = 0; x < 4; ++x)
361                            std::cout << "Spectral Average @(" << x << ", " << y << ") = " << view.at<double>(y, x, 0) << '\n';
362                    }
363                    std::cout << std::flush;
364                }
365            }
366
367        } catch (...) {
368            terminateRecordingThread();
369            throw;
370        }
371
372        terminateRecordingThread();
373
374        std::cout << "Stopping acquisition:\n" << std::flush;
375        g_camera->stopAcquisition();
376        std::cout << "Done.\n" << std::flush;
377    } catch (std::exception& e) {
378        std::cerr << "Error: " << e.what() << std::endl;
379        return 1;
380    } catch (...) {
381        std::cerr << "Unknown error." << std::endl;
382        return 1;
383    }
384
385    return 0;
386}

This source file will compile to the executable ExamplePushBroomProcessModelChunkwiseThreads.


.NET

The source code of the example can be found in the file ExamplePushBroomProcessModelChunkwiseThreads\Program.cs.

  1using System;
  2
  3namespace ExamplePushBroomProcessModelChunkwiseThreads
  4{
  5    class Program
  6    {
  7        /* Processing Queue Sets for the recording and the processing
  8         * contexts. Since both contexts may be executed simultaneously,
  9         * two different queue sets are required.
 10         */
 11        static LuxFlux.fluxEngineNET.ProcessingQueueSet g_pqsRecord;
 12        static LuxFlux.fluxEngineNET.ProcessingQueueSet g_pqsProcess;
 13
 14        /* Recording and processing contexts:
 15         * 
 16         *  - The recording context gathers all HSI lines into buffer
 17         *    containers
 18         *
 19         *  - The processing context takes a full buffer container and
 20         *    processes it as a cube with a given model
 21         */
 22        static LuxFlux.fluxEngineNET.ProcessingContext g_ctxRecord;
 23        static LuxFlux.fluxEngineNET.ProcessingContext g_ctxProcess;
 24
 25        // The camera (used by the recording thread)
 26        static LuxFlux.fluxEngineNET.InstrumentDevice g_camera;
 27
 28        /* Global variables related to the exchange of buffers between
 29         * the recording thread and the main thread.
 30         */
 31
 32        // The number of buffer containers to use (minimum 2)
 33        const int g_bufferCount = 2;
 34        // The size of each buffer container in number of lines
 35        const int g_bufferSize = 32;
 36        // The buffer containers (will be initialized later)
 37        static LuxFlux.fluxEngineNET.BufferContainer[] g_buffers = new LuxFlux.fluxEngineNET.BufferContainer[g_bufferCount];
 38        // An object to use for synchronization
 39        static object g_currentBufferSyncObject = new object();
 40        // The index of the current buffer that is currently being
 41        // written to by the recording thread
 42        static int g_currentBufferWriting = 0;
 43        // The index of the last buffer that was filled by the
 44        // recording thread
 45        static int g_currentBuffer = 0;
 46
 47        // A flag variable to indicate to the recording thread it
 48        // should terminate
 49        static bool g_recordingThreadTerminateFlag = false;
 50        // The recording thread
 51        static System.Threading.Thread g_recordingThread;
 52
 53        static LuxFlux.fluxEngineNET.ProcessingContext CreateCubeProcessingContext(LuxFlux.fluxEngineNET.Model model,
 54            LuxFlux.fluxEngineNET.ProcessingQueueSet pqs, LuxFlux.fluxEngineNET.HSIRecordingResult recordingResultInfo,
 55            LuxFlux.fluxEngineNET.DataType processingInputType, Int64 maxHeight, Int64 width)
 56        {
 57            var processingInputInfo = new LuxFlux.fluxEngineNET.ProcessingContext.ExplicitInputConfiguration();
 58            processingInputInfo.InputValueType = recordingResultInfo.ActualValueType;
 59            processingInputInfo.CalibrationInfo = recordingResultInfo.CalibrationInfo;
 60            var referenceInfo = new LuxFlux.fluxEngineNET.ProcessingContext.MemoryReferenceInput();
 61            if (recordingResultInfo.WhiteReference != null)
 62            {
 63                referenceInfo.WhiteReference = recordingResultInfo.WhiteReference.TensorView.WithInsertedUnityDimension(0);
 64            }
 65            if (recordingResultInfo.IlluminationReference != null)
 66            {
 67                referenceInfo.IlluminationReference = recordingResultInfo.IlluminationReference.TensorView.WithInsertedUnityDimension(0);
 68            }
 69            if (recordingResultInfo.DarkReference != null)
 70            {
 71                referenceInfo.DarkReference = recordingResultInfo.DarkReference.TensorView.WithInsertedUnityDimension(0);
 72            }
 73            processingInputInfo.ReferenceInput = referenceInfo;
 74            processingInputInfo.InputIsIlluminationCorrected = false;
 75
 76            return LuxFlux.fluxEngineNET.ProcessingContext.CreateForHSICube(model, pqs, LuxFlux.fluxEngineNET.HSICube_StorageOrder.BIP,
 77                processingInputType, maxHeight, -1, width, width, recordingResultInfo.Wavelengths, processingInputInfo);
 78        }
 79
 80
 81        static void TerminateRecordingThread()
 82        {
 83            g_recordingThreadTerminateFlag = true;
 84            g_recordingThread.Join();
 85        }
 86
 87        static void RecordingThreadMain()
 88        {
 89            while (!g_recordingThreadTerminateFlag)
 90            {
 91                try
 92                {
 93                    var buffer = g_camera.RetrieveBuffer(TimeSpan.FromMilliseconds(100));
 94                    if (buffer == null)
 95                        continue;
 96
 97                    try
 98                    {
 99                        g_ctxRecord.SetSourceData(buffer);
100                        g_ctxRecord.ProcessNext();
101                        lock (g_currentBufferSyncObject)
102                        {
103                            if (g_buffers[g_currentBufferWriting].Count < g_bufferSize)
104                            {
105                                g_buffers[g_currentBufferWriting].AddLastResult(g_ctxRecord);
106                                if (g_buffers[g_currentBufferWriting].Count >= g_bufferSize)
107                                {
108                                    g_currentBuffer = g_currentBufferWriting;
109                                    g_currentBufferWriting = (g_currentBufferWriting + 1) % g_bufferCount;
110                                    System.Threading.Monitor.Pulse(g_currentBufferSyncObject);
111                                }
112                            }
113                            else
114                            {
115                                Console.Error.WriteLine("Warning: processing did not extract data from buffer fast enough, ignoring frame");
116                            }
117                        }
118                    }
119                    finally
120                    {
121                        g_camera.ReturnBuffer(buffer);
122                    }
123                }
124                catch (Exception e)
125                {
126                    Console.Error.WriteLine($"Warning: exception during recording: {e.Message}");
127                }
128            }
129        }
130
131        static void Main(string[] args)
132        {
133            Console.WriteLine("fluxEngine version: " + LuxFlux.fluxEngineNET.Version.String);
134            var handle = new LuxFlux.fluxEngineNET.Handle(ExampleHelpers.IO.ReadLicenseFile());
135            handle.SetDriverBaseDirectory(ExampleHelpers.Paths.DriverDirectory);
136
137            g_pqsProcess = new LuxFlux.fluxEngineNET.ProcessingQueueSet(handle);
138            g_pqsProcess.CreateProcessingThreads(4);
139            g_pqsRecord = new LuxFlux.fluxEngineNET.ProcessingQueueSet(handle);
140            g_pqsRecord.CreateProcessingThreads(2);
141
142            // Load virtual camera
143            var enumeratedDevices = LuxFlux.fluxEngineNET.DeviceEnumeration.EnumerateDevices(handle, null, TimeSpan.FromSeconds(1));
144            LuxFlux.fluxEngineNET.EnumeratedDevice virtualCameraDevice = null;
145            foreach (var device in enumeratedDevices.Devices)
146            {
147                if (device.Driver.Name == "VirtualHyperCamera")
148                {
149                    virtualCameraDevice = device;
150                    break;
151                }
152            }
153
154            if (virtualCameraDevice == null)
155                throw new Exception("Could not find virtual camera driver");
156
157            var connectionSettings = new LuxFlux.fluxEngineNET.ConnectionSettings();
158            connectionSettings.DriverName = virtualCameraDevice.Driver.Name;
159            connectionSettings.DriverType = virtualCameraDevice.Driver.Type;
160            connectionSettings.Id = virtualCameraDevice.Id;
161            connectionSettings.Timeout = TimeSpan.FromSeconds(60);
162            connectionSettings.ConnectionParameters = new System.Collections.Generic.Dictionary<string, string>();
163            connectionSettings.ConnectionParameters["Cube"] = ExampleHelpers.Paths.ExampleDataFileName("MiniCube.hdr");
164            connectionSettings.ConnectionParameters["WhiteReferenceCube"] = ExampleHelpers.Paths.ExampleDataFileName("MiniCube_White.hdr");
165            connectionSettings.ConnectionParameters["DarkReferenceCube"] = ExampleHelpers.Paths.ExampleDataFileName("MiniCube_Dark.hdr");
166
167            Console.WriteLine("Attempting to connect to device...");
168            var deviceGroup = LuxFlux.fluxEngineNET.DeviceGroup.Connect(handle, connectionSettings);
169            Console.WriteLine("Connected.");
170            if (!(deviceGroup.PrimaryDevice is LuxFlux.fluxEngineNET.InstrumentDevice))
171            {
172                deviceGroup.Disconnect(TimeSpan.FromSeconds(5));
173                throw new Exception("The device is not an instrument device.");
174            }
175            g_camera = (LuxFlux.fluxEngineNET.InstrumentDevice)deviceGroup.PrimaryDevice;
176
177            g_camera.SetupInternalBuffers(5);
178
179            /* Load model
180             *
181             * This should be done after connecting with the camera, in
182             * case the license is tied to a camera serial number. (In
183             * case the license is tied to a dongle or a mainboard id,
184             * this may be done beforehand.)
185             */
186            var model = LuxFlux.fluxEngineNET.Model.LoadFromFile(handle, ExampleHelpers.Paths.ExampleDataFileName("instrument_function.fluxmdl"));
187
188            /* NOTE:
189             * For real devices at this point the user should probably be
190             * asked to insert a white reference underneath the camera.
191             *
192             * For the virtual device this is not required.
193             */
194
195            var acqParams = new LuxFlux.fluxEngineNET.InstrumentDevice.AcquisitionParameters();
196            Console.WriteLine("Measuring white reference:");
197            var whiteReference = LuxFlux.fluxEngineNET.Util.CreateRingBufferContainer(g_camera, 10);
198            acqParams.ReferenceName = "WhiteReference";
199            g_camera.StartAcquisition(acqParams);
200            for (int i = 0; i < 10; ++i)
201            {
202                var buffer = g_camera.RetrieveBuffer(TimeSpan.FromSeconds(1));
203                if (buffer != null)
204                {
205                    try
206                    {
207                        whiteReference.Add(buffer);
208                    }
209                    finally
210                    {
211                        g_camera.ReturnBuffer(buffer);
212                    }
213                }
214            }
215            g_camera.StopAcquisition();
216            Console.WriteLine("Done.");
217
218            /* NOTE:
219             * For real devices at this point the user should probably be
220             * asked to obscure the optics in front of the camera in order
221             * for a proper dark reference to be measured.
222             *
223             * For the virtual device this is not required.
224             *
225             * Some cameras have an internal shutter, in which case manual
226             * user intervention is not required here either.
227             */
228
229            Console.WriteLine("Measuring dark reference:");
230            var darkReference = LuxFlux.fluxEngineNET.Util.CreateRingBufferContainer(g_camera, 10);
231            acqParams.ReferenceName = "DarkReference";
232            g_camera.StartAcquisition(acqParams);
233            for (int i = 0; i < 10; ++i)
234            {
235                var buffer = g_camera.RetrieveBuffer(TimeSpan.FromSeconds(1));
236                if (buffer != null)
237                {
238                    try
239                    {
240                        darkReference.Add(buffer);
241                    }
242                    finally
243                    {
244                        g_camera.ReturnBuffer(buffer);
245                    }
246                }
247            }
248            g_camera.StopAcquisition();
249            Console.WriteLine("Done.");
250
251            /* Create the recording context. This will be used to
252             * store the recorded data in the buffer containers.
253             */
254            var instrumentReferences = new LuxFlux.fluxEngineNET.ProcessingContext.BufferReferenceInput();
255            instrumentReferences.WhiteReference = whiteReference;
256            instrumentReferences.DarkReference = darkReference;
257            var instrumentParameters = new LuxFlux.fluxEngineNET.ProcessingContext.InstrumentParameters();
258            instrumentParameters.ReferenceInput = instrumentReferences;
259            var contextAndInfo = LuxFlux.fluxEngineNET.ProcessingContext.CreateForInstrumentHSIRecording(g_camera, g_pqsRecord, LuxFlux.fluxEngineNET.ValueType.Intensity, instrumentParameters);
260            g_ctxRecord = contextAndInfo.Context;
261            // Create buffers to store HSI cube data in
262            for (int i = 0; i < g_bufferCount; ++i)
263                g_buffers[i] = LuxFlux.fluxEngineNET.Util.CreateBufferContainer(g_ctxRecord, g_bufferSize);
264
265            // Create the context that will process the cube
266            var processingInputDataType = g_ctxRecord.OutputSinkInfos[0].TensorStructure.DataType;
267            g_ctxProcess = CreateCubeProcessingContext(model, g_pqsProcess, contextAndInfo, processingInputDataType, g_bufferSize, g_buffers[0].Dimensions[1]);
268            int sinkIndex = g_ctxProcess.OutputSinkInfoById(/* outputId = */ 0).Index;
269
270            /* NOTE:
271             * For a real device, at this point the user should
272             * probably be asked to position the object to measure
273             * underneath the camera and start the motion of their
274             * motion control device.
275             *
276             * For the virtual device this is not required.
277             */
278
279            Console.WriteLine("Starting acquisition:");
280            acqParams.ReferenceName = null;
281            g_camera.StartAcquisition(acqParams);
282            Console.WriteLine("Done.");
283
284            lock (g_currentBufferSyncObject)
285            {
286                g_currentBuffer = 0;
287                g_currentBufferWriting = 0;
288                for (int i = 0; i < g_bufferCount; ++i)
289                    g_buffers[i].Clear();
290            }
291
292            g_recordingThreadTerminateFlag = false;
293            g_recordingThread = new System.Threading.Thread(RecordingThreadMain);
294            g_recordingThread.Start();
295
296            try
297            {
298                // Wait for 4 cubes and process each of them
299                const int nCubesToProcess = 4;
300                LuxFlux.fluxEngineNET.GenericTensor data = null;
301
302                for (int i = 0; i < nCubesToProcess; ++i)
303                {
304                    lock (g_currentBufferSyncObject)
305                    {
306                        while (g_buffers[g_currentBuffer].Count < g_bufferSize)
307                            System.Threading.Monitor.Wait(g_currentBufferSyncObject);
308
309                        data = g_buffers[g_currentBuffer].TensorCopy();
310
311                        g_buffers[g_currentBuffer].Clear();
312                    }
313
314                    Console.WriteLine("Got new cube to process...");
315                    g_ctxProcess.SetSourceData(new LuxFlux.fluxEngineNET.ReadOnlyTensorView(data));
316                    g_ctxProcess.ProcessNext();
317
318                    // Has the correct structure
319                    var resultData = g_ctxProcess.OutputSinkData(sinkIndex).AsTensor;
320                    if (resultData.Order == 3 && resultData.Dimensions[2] == 1 && resultData.DataType == LuxFlux.fluxEngineNET.DataType.Float64)
321                    {
322                        for (int y = 0; y < 10; ++y)
323                        {
324                            for (int x = 0; x < 4; ++x)
325                            {
326                                Console.WriteLine($"Spectral Average @({x}, {y}) = {resultData.Value<double>(y, x, 0)}");
327                            }
328                        }
329                    }
330                }
331
332            }
333            finally
334            {
335                TerminateRecordingThread();
336            }
337
338            Console.WriteLine("Stopping acquisition:");
339            g_camera.StopAcquisition();
340            Console.WriteLine("Done.");
341
342            Console.WriteLine("Disconnecting from device...");
343            deviceGroup.Disconnect(TimeSpan.FromSeconds(5));
344            Console.WriteLine("Done.");
345            g_ctxProcess.Dispose();
346            g_ctxRecord.Dispose();
347            g_pqsProcess.Dispose();
348            g_pqsRecord.Dispose();
349            model.Dispose();
350            handle.Dispose();
351        }
352    }
353}

Python

There is no Python version of this example.

Expected Output

The output should look like the following:

fluxEngine version: [...]
Attempting to connect to device...
- DarkReferenceCube: examples/data/MiniCube_Dark.hdr
- Cube: examples/data/MiniCube.hdr
- WhiteReferenceCube: examples/data/MiniCube_White.hdr
Connected.
Measuring white reference:
Done.
Measuring dark reference:
Done.
Starting acquisition:
Done.
Got new cube to process...
Spectral Average @(0, 0) = 0.652092
Spectral Average @(1, 0) = 0.641689
Spectral Average @(2, 0) = 0.633774
Spectral Average @(3, 0) = 0.636609
Spectral Average @(0, 1) = 0.664377
Spectral Average @(1, 1) = 0.656757
Spectral Average @(2, 1) = 0.648442
Spectral Average @(3, 1) = 0.652293
[...]
Spectral Average @(1, 9) = 0.68223
Spectral Average @(2, 9) = 0.678032
Spectral Average @(3, 9) = 0.678329
Got new cube to process...
Spectral Average @(0, 0) = 0.687697
Spectral Average @(1, 0) = 0.678115
Spectral Average @(2, 0) = 0.675959
[...]
Spectral Average @(2, 8) = 0.675959
Spectral Average @(3, 8) = 0.674892
Spectral Average @(0, 9) = 0.691204
Spectral Average @(1, 9) = 0.68223
Spectral Average @(2, 9) = 0.678032
Spectral Average @(3, 9) = 0.678329
Stopping acquisition:
Done.