Device Connection
In the Python API the path to the driver directory must be specified. This may be done via the setDriverBaseDirectory() method of the Handle object:
handle.setDriverBaseDirectory(r'C:\Path\To\DriverBase')
In that example an instrument driver would be found in the path C:\Path\To\DriverBase\instrument\SomeDriver\SomeDriver.dll.
Enumerating Devices
Before device connection is possible the user must enumerate devices and drivers. This can be done with the enumerateDevices() function in the fluxEngine module.
The user may select the driver type to enumerate. There are currently three options:
- Instrument devices: cameras, spectrometers, and such. Specify fluxEngine.DriverType.Instrument as the device type to enumerate.
- Light control devices: specify fluxEngine.DriverType.LightControl as the device type to enumerate.
- All supported device types: in that case the user should specify -1 to find devices of all driver types.
The user must also specify a timeout for the enumeration process. The enumeration will take exactly that long and then return all devices and drivers that could be found.
Note
For cameras with a GigE Vision interface a timeout larger than three seconds (recommended: four seconds) is required: the standard specifies three seconds as the timeout for device discovery, and the enumeration process also needs to load the driver, so using exactly three seconds is likely not enough.
At the moment the enumeration process will always wait until the timeout expires before returning all devices that were found in that time.
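For example, enumerating only instrument devices with a four second timeout could look as follows (a minimal sketch, assuming h is the fluxEngine Handle object and that the timeout is given in milliseconds, as in the full example below):
# enumerate only instrument drivers/devices, waiting the full 4000 ms
# recommended for GigE Vision device discovery
result = fluxEngine.enumerateDevices(h, fluxEngine.DriverType.Instrument, 4000)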
Error Reporting
The enumeration process may also generate errors and warnings. These will be reported alongside the devices and drivers that were found.
Example
The following example will enumerate drivers and devices:
result = fluxEngine.enumerateDevices(h, -1, 1000)
for driver in result.drivers:
    print("Found driver {}, type {}, description and version: {} {}".
          format(driver.name, driver.type,
                 driver.description, driver.version))
for device in result.devices:
    print("Found device {}, driver: {}/{}".
          format(device.displayName, device.driver.name,
                 device.driver.type))
    for parameter in device.parameterInfo.parameters():
        print(" - parameter {} of type {}".
              format(parameter.name, parameter.type))
for warning in result.warnings:
    print("Warning: {}".format(warning.message))
for error in result.errors:
    print("Error: {}".format(error.message))
Device Connection
To connect to a device it has to be enumerated first. It may then be identified uniquely via the driver name, driver type and device id. The driver name and type will be stable (for a given driver), but the device id must always be obtained via enumeration, as it may change after a system reboot or after the device has been unplugged and plugged in again.
In addition, some drivers may require parameters during connection, for example a calibration file. Which parameters are available may be queried during enumeration (see the enumeration example) and will depend on the specific driver. Some drivers may not require any parameters at all.
The user must create a ConnectionSettings object that contains the device identifier as well as any parameter values.
The following example would use the first device of the enumeration:
driver = result.devices[0].driver
device = result.devices[0]
settings = fluxEngine.ConnectionSettings(driver.name, driver.type,
                                         device.id)
In real code the user should look at the devices during the enumeration process instead of just arbitrarily choosing the first device the enumeration process returned.
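For instance, one could pick out the device of a particular driver from the enumeration result; a minimal sketch (the driver name filtered for here is purely illustrative):
# pick the first enumerated device provided by a particular driver
# (raises StopIteration if no such device was found)
device = next(d for d in result.devices
              if d.driver.name == 'VirtualHyperCamera')
driver = device.driver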
To demonstrate connection parameters, assuming that the chosen driver is the virtual HSI PushBroom imager (driver name VirtualHyperCamera), the following example shows how to properly specify the cubes required for the virtual camera to work:
settings.connectionParameters['Cube'] = r'C:\cube.hdr'
settings.connectionParameters['WhiteReferenceCube'] = r'C:\cube_White.hdr'
settings.connectionParameters['DarkReferenceCube'] = r'C:\cube_Dark.hdr'
In addition a timeout may be set. Note that a timeout of less than 10s is typically unrealistic (as many devices will take longer to connect), while 60s is not an unreasonable timeout for connecting to a device. (During the initial connection process the fluxEngine drivers will also read out metadata from the device, which is typically what takes the most time during the actual connection process.)
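A minimal sketch of configuring such a timeout on the settings object is shown below; the timeout attribute name and the millisecond unit are assumptions made for illustration, so consult the ConnectionSettings reference for the actual spelling:
# assumed attribute name and unit (milliseconds); verify against the
# ConnectionSettings documentation before relying on it
settings.timeout = 60000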
To perform the actual connection one must create a new DeviceGroup object. This is because connecting to some devices may result in a connection to the device itself, with some functions of the device provided in the form of subdevices.
The following code demonstrates how to connect to a device:
deviceGroup = fluxEngine.DeviceGroup(h, settings)
camera = deviceGroup.primaryDevice()
In most cases the user will want to talk to the primary device of the device group only.
Parametrization
Most devices can be controlled via parameters. The available parameters may be queried via the parameterList() method.
There are three parameter lists for different purposes:
- Device parameters (fluxEngine.Device.ParameterListType.Parameter) that control the device
- Meta information parameters (fluxEngine.Device.ParameterListType.MetaInfo) that provide additional information about a device (these are read-only), such as the firmware version of the device
- Status information parameters (fluxEngine.Device.ParameterListType.Status) that provide status information about the device (these are read-only), such as temperature sensor values
The following example code shows how to obtain a list of parameters of the device:
listType = fluxEngine.Device.ParameterListType.Parameter
deviceSettings = camera.parameterList(listType)
for parameter in deviceSettings.parameters():
    print(" - parameter {} of type {}".
          format(parameter.name, parameter.type))
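The same pattern also works for the two read-only lists described above; a minimal sketch for the meta information list (variable names are illustrative):
# meta information parameters are read-only and describe the device itself
metaInfoList = camera.parameterList(fluxEngine.Device.ParameterListType.MetaInfo)
for parameter in metaInfoList.parameters():
    print(" - meta info parameter {} of type {}".
          format(parameter.name, parameter.type))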
Parameters may be read and changed via the getParameter() and setParameter() methods. The following example shows how to change the exposure time of a camera:
print("ExposureTime = {}".
format(camera.getParameter('ExposureTime')))
# the unit depends on the device
camera.setParameter('ExposureTime', 3500)
print("ExposureTime = {}".
format(camera.getParameter('ExposureTime')))
Instrument Devices
This section will deal with instrument devices specifically.
Acquisition Setup
Before data may be acquired from an instrument device the shared memory segment between the driver process and fluxEngine (see Instrument Buffers and Shared Memory for details) must be set up properly. This may be done via the setupInternalBuffers() method.
It takes a single parameter: the maximum number of buffers a user may have in use during a single acquisition. The user may choose to use fewer buffers during an acquisition, but not more.
The more buffers are specified here, the larger the shared memory segment, and the more RAM will be used.
For live processing of data with an immediate response, the lowest allowed number, 5, is typically recommended to reduce latency, at the risk of dropping frames if processing is not fast enough. To record data, a larger number (even up to e.g. 100) is a better choice. If both recordings and live processing are to be done during the same connection with the device, the user should specify the larger number here and choose a smaller number for live processing when starting acquisition for that.
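A minimal sketch of this setup step, assuming camera is the instrument device obtained from the device group above and using the lowest recommended value:
# allow at most 5 buffers to be in flight during any single acquisition;
# this determines the size of the shared memory segment
camera.setupInternalBuffers(5)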
Acquiring a buffer
The following example shows how to acquire a single buffer from the instrument (i.e. a frame when talking to a camera) and print its contents:
p = fluxEngine.InstrumentDevice.AcquisitionParameters()
# allow at most 5 buffers in the acquisition queue at any given time
p.bufferCount = 5
camera.startAcquisition(p)
# try to retrieve the next buffer (may return None, e.g. on timeout)
buffer = camera.retrieveBuffer(1000)
if buffer is not None:
    print(buffer.getData())
    # hand the buffer back so the driver can reuse it
    camera.returnBuffer(buffer)
camera.stopAcquisition()
The startAcquisition() command begins acquisition on the device. The user may then use retrieveBuffer() to retrieve a buffer from the queue, do something with that buffer (in this case call getData()) and then return the buffer to the device via the returnBuffer() method. The stopAcquisition() method will then stop the acquisition.
The raw data in a buffer is often not very useful when talking to HSI cameras, as it is not standardized and some corrections may not yet have been performed. See below for an example of how to record data in a standardized format.
Recording a white reference
With HSI cameras and spectrometers it is of vital importance to have a proper white and dark reference. To reduce the effect of detector noise, it is useful to average multiple buffers. To make recording multiple buffers for later use in a white reference easier, there is a class called BufferContainer that allows the user to gather multiple buffers in a single container and use them later on to initialize a processing context for either recording data or processing it.
Some devices may operate in a slightly different mode when measuring a reference. For example, cameras with a shutter may choose to close it during the measurement of a dark reference. To tell fluxEngine that the next measurement will be a reference measurement, the user may specify this via the referenceName attribute of the AcquisitionParameters object:
p = fluxEngine.InstrumentDevice.AcquisitionParameters()
p.referenceName = "WhiteReference"
Currently only "WhiteReference"
and "DarkReference"
are
understood. (Also, None
is allowed to indicate that no reference
is being measured.)
The following code will create a buffer container that holds 10 white reference buffers:
whiteBuffer = fluxEngine.BufferContainer(camera, 10)
The instrument device (here camera) must be specified to provide information about the structure of the buffers.
camera.startAcquisition(p)
for i in range(10):
    buffer = camera.retrieveBuffer(1000)
    if buffer is not None:
        whiteBuffer.add(buffer)
        camera.returnBuffer(buffer)
camera.stopAcquisition()
Note
The buffer count specified in the acquisition parameters or during SHM setup only indicates the total number of buffers that are available at any given time. After starting acquisition the instrument will typically return buffers indefinitely until acquisition is stopped again. As long as each buffer is returned by the user, the number of buffers specified in the acquisition parameters may be quite small (lower latency), while the total number of buffers processed may be huge. In addition, a buffer container will record the data of a buffer and not the buffer itself. For this reason the buffer container in this example can hold the data of 10 buffers, while the queue will only ever see up to 5 buffers at the same time.
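Recording a dark reference works the same way; the following sketch assumes the detector is darkened during the measurement (e.g. shutter closed or lens capped), and the variable darkBuffer is named to match the recording example further below:
pDark = fluxEngine.InstrumentDevice.AcquisitionParameters()
pDark.bufferCount = 5
pDark.referenceName = "DarkReference"
darkBuffer = fluxEngine.BufferContainer(camera, 10)
camera.startAcquisition(pDark)
for i in range(10):
    buffer = camera.retrieveBuffer(1000)
    if buffer is not None:
        darkBuffer.add(buffer)
        camera.returnBuffer(buffer)
camera.stopAcquisition()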
Recording data from an instrument
To record data from an instrument in a standardized manner, a recording processing context must be created. This may be done via the constructor of the ProcessingContext class.
Recording contexts are currently only available for HSI PushBroom cameras in fluxEngine.
The user must first specify instrument parameters. These will indicate whether the user has previously measured a white and/or dark reference. If the user has not measured a white reference, they must still create an object of type InstrumentParameters, but may leave the whiteReference attribute as-is, or explicitly set it to None.
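As a minimal sketch of that no-reference case (the variable name is illustrative):
# no white reference measured: whiteReference simply stays None
intensityOnlyParameters = fluxEngine.InstrumentParameters()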
Assuming both a white and dark reference have been measured according to the last sub-section, the following code will create the correct instrument parameters using the buffer containers:
instrumentParameters = fluxEngine.InstrumentParameters()
instrumentParameters.whiteReference = whiteBuffer
instrumentParameters.darkReference = darkBuffer
To create a recording context one needs to set the context type to fluxEngine.ProcessingContext.InstrumentHSIRecording. The user must also choose the value type of the recording. This will either be ValueType.Intensity or ValueType.Reflectance. The latter is typically only available if a white reference is present.
contextType = fluxEngine.ProcessingContext.InstrumentHSIRecording
model = None
valueType = fluxEngine.ValueType.Reflectance
ip = instrumentParameters
ctx = fluxEngine.ProcessingContext(model, contextType,
                                   device=camera,
                                   instrumentParameters=ip,
                                   valueType=valueType)
When creating a recording context additional information will be returned to the user. In the Python API this is accessible via the hsiRecordingResultInfo() method:
recordingInfo = ctx.hsiRecordingResultInfo()
print(recordingInfo.wavelengths)
print(recordingInfo.whiteReference)
print(recordingInfo.darkReference)
The wavelengths here are a simple list of floating point values of the wavelengths associated with the lambda dimension of the resulting HSI data, while the whiteReference and darkReference entries are the standardized versions of the user-supplied white and dark references – in this case cubes in BIP storage order in the same data type that the context also returns.
That context may then be used to process the buffers that are obtained from the instrument device. Before coming to that, though, it should be noted that a buffer container may also be used to hold standardized recording data, not just raw buffers. Using the function createBufferContainerForRecordingContext() the user may create a buffer container that will hold the result of this recording context instead of just raw buffers. The following example shows how to create a buffer container that will hold up to 500 PushBroom lines for the given recording context:
recordingData = fluxEngine.createBufferContainerForRecordingContext(ctx, 500)
The following example now shows how to actually record some data:
camera.startAcquisition(p)
for i in range(150):
    buffer = camera.retrieveBuffer(1000)
    if buffer is not None:
        ctx.setSourceData(buffer)
        ctx.processNext()
        recordingData.addLastResult(ctx)
        camera.returnBuffer(buffer)
camera.stopAcquisition()
The variable recordingData will then contain the HSI cube data that was recorded. The result may then be stored in e.g. an ENVI file; see the section on file I/O for details on how to store such a recording on disk.
Processing data from an instrument
Processing data from an instrument, like recording standardized data, also requires the creation of a processing context, but this time a model has to be specified as well.
Assuming that a white and dark reference have been measured, the following code shows how to create a processing context for model data processing:
# load the model somehow
model = fluxEngine.Model(...)
# Specify the white and dark references
ip = fluxEngine.InstrumentParameters()
ip.whiteReference = whiteBuffer
ip.darkReference = darkBuffer
contextType = fluxEngine.ProcessingContext.InstrumentProcessing
ctx = fluxEngine.ProcessingContext(model, contextType,
                                   device=camera,
                                   instrumentParameters=ip)
Acquisition works similarly to the recording example, but the data can be retrieved via the output sinks of the model; see the processing chapter for more details:
camera.startAcquisition(p)
for i in range(30):
    buffer = camera.retrieveBuffer(1000)
    if buffer is not None:
        ctx.setSourceData(buffer)
        ctx.processNext()
        data = ctx.outputSinkData(0)
        # Do something with data
        camera.returnBuffer(buffer)
camera.stopAcquisition()
The precise structure of the data will depend on the model the user has chosen to load.