.. SPDX-License-Identifier: CC-BY-SA-4.0

Using libcamera in a C++ application
====================================

This tutorial shows how to create a C++ application that uses libcamera to
interface with a camera on a system, capture frames from it for 3 seconds, and
write metadata about the frames to standard output.

Application skeleton
--------------------

Most of the code in this tutorial runs in the ``int main()`` function
with a separate global function to handle events. The two functions need
to share data, which is stored in global variables for simplicity. A
production-ready application would organize the various objects created
in classes, and the event handler would be a class member function to
provide context data without requiring global variables.

Use the following code snippets as the initial application skeleton.
It already lists all the necessary include directives and instructs the
compiler to use the libcamera namespace, which gives access to the libcamera
defined names and types without the need to prefix them.

.. code:: cpp

   #include <iomanip>
   #include <iostream>
   #include <memory>
   #include <thread>

   #include <libcamera/libcamera.h>

   using namespace libcamera;
   using namespace std::chrono_literals;

   int main()
   {
       // Code to follow

       return 0;
   }

Camera Manager
--------------

Every libcamera-based application needs an instance of a `CameraManager`_ that
runs for the life of the application. When the Camera Manager starts, it
enumerates all the cameras detected in the system. Behind the scenes, libcamera
abstracts and manages the complex pipelines that kernel drivers expose through
the `Linux Media Controller`_ and `Video for Linux`_ (V4L2) APIs, meaning that
an application doesn't need to handle device or driver specific details.

.. _CameraManager: https://libcamera.org/api-html/classlibcamera_1_1CameraManager.html
.. _Linux Media Controller: https://www.kernel.org/doc/html/latest/media/uapi/mediactl/media-controller-intro.html
.. _Video for Linux: https://www.linuxtv.org/docs.php

Before the ``int main()`` function, create a global shared pointer
variable for the camera to support the event callback later:

.. code:: cpp

   static std::shared_ptr<Camera> camera;

Create a Camera Manager instance at the beginning of the main function, and
then start it. An application must only create a single Camera Manager
instance.

The CameraManager can be stored in a unique_ptr to automate deleting the
instance when it is no longer used, but care must be taken to ensure all
cameras are released explicitly before this happens.

.. code:: cpp

   std::unique_ptr<CameraManager> cm = std::make_unique<CameraManager>();
   cm->start();

During the application initialization, the Camera Manager is started to
enumerate all the supported devices and create cameras that the application
can interact with.

Once the camera manager is started, we can use it to iterate the available
cameras in the system:

.. code:: cpp

   for (auto const &camera : cm->cameras())
       std::cout << camera->id() << std::endl;

Printing the camera id lists the machine-readable unique identifiers, so for
example, the output on a Linux machine with a connected USB webcam is
``\_SB_.PCI0.XHC_.RHUB.HS08-8:1.0-5986:2115``.

What libcamera considers a camera
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The libcamera library considers any unique source of video frames, which
usually corresponds to a camera sensor, as a single camera device. Camera
devices expose streams, which are obtained by processing data from the single
image source and all share some basic properties such as the frame duration
and the image exposure time, as they only depend on the image source
configuration.

Applications select one or multiple Camera devices they wish to operate on,
and require frames from at least one of their Streams.

Create and acquire a camera
---------------------------

This example application uses a single camera (the first enumerated one) that
the Camera Manager reports as available to applications.

Camera devices are stored by the CameraManager in a list accessible by index,
or can be retrieved by name through the ``CameraManager::get()`` function. The
code below retrieves the name of the first available camera and gets the
camera by name from the Camera Manager, after making sure that at least one
camera is available.

.. code:: cpp

   auto cameras = cm->cameras();
   if (cameras.empty()) {
       std::cout << "No cameras were identified on the system."
                 << std::endl;
       cm->stop();
       return EXIT_FAILURE;
   }

   std::string cameraId = cameras[0]->id();

   auto camera = cm->get(cameraId);
   /*
    * Note that `camera` may not compare equal to `cameras[0]`.
    * In fact, it might simply be a `nullptr`, as the particular
    * device might have disappeared (and reappeared) in the meantime.
    */

Once a camera has been selected, an application needs to acquire an exclusive
lock on it so no other application can use it.

.. code:: cpp

   camera->acquire();

Configure the camera
--------------------

Before the application can do anything with the camera, it needs to configure
the image format and sizes of the streams it wants to capture frames from.

Stream configurations are represented by instances of the
``StreamConfiguration`` class, which are grouped together in a
``CameraConfiguration`` object. Before an application can start setting its
desired configuration, a ``CameraConfiguration`` instance needs to be
generated from the ``Camera`` device using the
``Camera::generateConfiguration()`` function.

The libcamera library uses the ``StreamRole`` enumeration to define predefined
ways an application intends to use a camera. The
``Camera::generateConfiguration()`` function accepts a list of desired roles
and generates a ``CameraConfiguration`` with the best stream parameters
configuration for each of the requested roles. If the camera can handle the
requested roles, it returns an initialized ``CameraConfiguration``; if it
can't, it returns a null pointer.

It is possible for applications to generate an empty ``CameraConfiguration``
instance by not providing any role. The desired configuration will have to be
filled in and validated manually.

In the example application, create a new configuration variable and use the
``Camera::generateConfiguration`` function to produce a ``CameraConfiguration``
for the single ``StreamRole::Viewfinder`` role.

.. code:: cpp

   std::unique_ptr<CameraConfiguration> config = camera->generateConfiguration( { StreamRole::Viewfinder } );

The generated ``CameraConfiguration`` has a ``StreamConfiguration`` instance
for each ``StreamRole`` the application requested. Each of these has a default
size and format that the camera assigned, and a list of supported pixel
formats and sizes.

The code below accesses the first and only ``StreamConfiguration`` item in the
``CameraConfiguration`` and outputs its parameters to standard output.

.. code:: cpp

   StreamConfiguration &streamConfig = config->at(0);
   std::cout << "Default viewfinder configuration is: " << streamConfig.toString() << std::endl;

This is expected to output something like:

``Default viewfinder configuration is: 1280x720-MJPEG``

Change and validate the configuration
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

With an initialized ``CameraConfiguration``, an application can make changes
to the parameters it contains. For example, to change the width and height,
use the following code:

.. code:: cpp

   streamConfig.size.width = 640;
   streamConfig.size.height = 480;

If an application changes any parameters, it must validate the configuration
before applying it to the camera using the ``CameraConfiguration::validate()``
function. If the new values are not supported by the ``Camera`` device, the
validation process adjusts the parameters to what it considers to be the
closest supported values.

The ``validate`` function returns a `Status`_ which applications shall check
to see if the Pipeline Handler adjusted the configuration.

.. _Status: https://libcamera.org/api-html/classlibcamera_1_1CameraConfiguration.html#a64163f21db2fe1ce0a6af5a6f6847744

For example, the code above set the width and height to 640x480, but if the
camera cannot produce an image that large, it might adjust the configuration
to the supported size of 320x240 and return ``Adjusted`` as validation status
result.

If the configuration to validate cannot be adjusted to a set of supported
values, the validation procedure fails and returns the ``Invalid`` status.

For this example application, the code below prints the adjusted values to
standard out.

.. code:: cpp

   config->validate();
   std::cout << "Validated viewfinder configuration is: " << streamConfig.toString() << std::endl;

For example, the output might be something like

``Validated viewfinder configuration is: 320x240-MJPEG``

A validated ``CameraConfiguration`` can be given to the ``Camera`` device to
be applied to the system.

.. code:: cpp

   camera->configure(config.get());

If an application doesn't first validate the configuration before calling
``Camera::configure()``, there's a chance that calling the function can fail,
if the given configuration would have to be adjusted.

Allocate FrameBuffers
---------------------

An application needs to reserve the memory that libcamera can write incoming
frames and data to, and that the application can then read. The libcamera
library uses ``FrameBuffer`` instances to represent the buffers allocated in
memory. An application should reserve enough memory for the frame size the
streams need based on the configured image sizes and formats.

The libcamera library consumes buffers provided by applications as
``FrameBuffer`` instances, which makes libcamera a consumer of buffers
exported by other devices (such as displays or video encoders), or allocated
from an external allocator (such as ION on Android).

In some situations, applications do not have any means to allocate or get
hold of suitable buffers, for instance, when no other device is involved, or
on Linux platforms that lack a centralized allocator. The
``FrameBufferAllocator`` class provides a buffer allocator an application can
use in these situations.

An application doesn't have to use the default ``FrameBufferAllocator`` that
libcamera provides. It can instead allocate memory manually and pass the
buffers in ``Request``\s (read more about ``Request`` in `the frame capture
section <#frame-capture>`_ of this guide). The example in this guide covers
using the ``FrameBufferAllocator`` that libcamera provides.

Using the libcamera ``FrameBufferAllocator``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Applications create a ``FrameBufferAllocator`` for a Camera and use it
to allocate buffers for streams of a ``CameraConfiguration`` with the
``allocate()`` function.

The list of allocated buffers can be retrieved using the ``Stream`` instance
as the parameter of the ``FrameBufferAllocator::buffers()`` function.

.. code:: cpp

   FrameBufferAllocator *allocator = new FrameBufferAllocator(camera);

   for (StreamConfiguration &cfg : *config) {
       int ret = allocator->allocate(cfg.stream());
       if (ret < 0) {
           std::cerr << "Can't allocate buffers" << std::endl;
           return -ENOMEM;
       }

       size_t allocated = allocator->buffers(cfg.stream()).size();
       std::cout << "Allocated " << allocated << " buffers for stream" << std::endl;
   }

Frame Capture
~~~~~~~~~~~~~

The libcamera library implements a streaming model based on per-frame
requests. For each frame an application wants to capture it must queue a
request for it to the camera. With libcamera, a ``Request`` associates at
least one ``Stream`` with a ``FrameBuffer`` representing the memory location
where frames have to be stored.

First, by using the ``Stream`` instance associated with each
``StreamConfiguration``, retrieve the list of ``FrameBuffer``\s created for it
using the frame allocator. Then create a vector of requests to be submitted to
the camera.

.. code:: cpp

   Stream *stream = streamConfig.stream();
   const std::vector<std::unique_ptr<FrameBuffer>> &buffers = allocator->buffers(stream);
   std::vector<std::unique_ptr<Request>> requests;

Proceed to fill the request vector by creating ``Request`` instances from the
camera device, and associate a buffer with each of them for the ``Stream``.

.. code:: cpp

   for (unsigned int i = 0; i < buffers.size(); ++i) {
       std::unique_ptr<Request> request = camera->createRequest();
       if (!request) {
           std::cerr << "Can't create request" << std::endl;
           return -ENOMEM;
       }

       const std::unique_ptr<FrameBuffer> &buffer = buffers[i];
       int ret = request->addBuffer(stream, buffer.get());
       if (ret < 0) {
           std::cerr << "Can't set buffer for request"
                     << std::endl;
           return ret;
       }

       requests.push_back(std::move(request));
   }

.. TODO: Controls

.. TODO: A request can also have controls or parameters that you can apply to the image.

Event handling and callbacks
----------------------------

The libcamera library uses the concept of `signals and slots`_ (similar to `Qt
Signals and Slots`_) to connect events with callbacks to handle them.

.. _signals and slots: https://libcamera.org/api-html/classlibcamera_1_1Signal.html#details
.. _Qt Signals and Slots: https://doc.qt.io/qt-6/signalsandslots.html

The ``Camera`` device emits two signals that applications can connect to in
order to execute callbacks on frame completion events.

The ``Camera::bufferCompleted`` signal notifies applications that a buffer
with image data is available. Receiving notifications about the single buffer
completion event allows applications to implement partial request completion
support, and to inspect the buffer content before the request it is part of
has fully completed.

The ``Camera::requestCompleted`` signal notifies applications that a request
has completed, which means all the buffers the request contains have now
completed. Request completion notifications are always emitted in the same
order as the requests have been queued to the camera.

To be notified when a signal is emitted, connect a slot function to the
signal to handle it in the application code.

.. code:: cpp

   camera->requestCompleted.connect(requestComplete);

For this example application, only the ``Camera::requestCompleted`` signal
gets handled and the matching ``requestComplete`` slot function outputs
information about the FrameBuffer to standard output. This callback is
typically where an application accesses the image data from the camera and
does something with it.

Signals operate in the libcamera ``CameraManager`` thread context, so it is
important not to block the thread for a long time, as this blocks internal
processing of the camera pipelines, and can affect realtime performance.

Handle request completion events
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Create the ``requestComplete`` function by matching the slot signature:

.. code:: cpp

   static void requestComplete(Request *request)
   {
       // Code to follow
   }

Request completion events can be emitted for requests which have been
canceled, for example, by unexpected application shutdown. To avoid an
application processing invalid image data, it's worth checking that the
request has completed successfully. The list of request completion statuses
is available in the `Request::Status`_ class enum documentation.

.. _Request::Status: https://www.libcamera.org/api-html/classlibcamera_1_1Request.html#a2209ba8d51af8167b25f6e3e94d5c45b

.. code:: cpp

   if (request->status() == Request::RequestCancelled)
       return;

If the ``Request`` has completed successfully, applications can access the
completed buffers using the ``Request::buffers()`` function, which returns a
map of ``FrameBuffer`` instances associated with the ``Stream`` that produced
the images.

.. code:: cpp

   const std::map<const Stream *, FrameBuffer *> &buffers = request->buffers();

Iterating through the map allows applications to inspect each completed
buffer in this request, and access the metadata associated with each frame.

The metadata buffer contains information such as the capture status, a
timestamp, and the bytes used, as described in the `FrameMetadata`_
documentation.

.. _FrameMetadata: https://libcamera.org/api-html/structlibcamera_1_1FrameMetadata.html

.. code:: cpp

   for (auto &bufferPair : buffers) {
       FrameBuffer *buffer = bufferPair.second;
       const FrameMetadata &metadata = buffer->metadata();
   }

For this example application, inside the ``for`` loop from above, we can
print the frame sequence number and details of the planes.

.. code:: cpp

   std::cout << " seq: " << std::setw(6) << std::setfill('0') << metadata.sequence << " bytesused: ";

   unsigned int nplane = 0;
   for (const FrameMetadata::Plane &plane : metadata.planes()) {
       std::cout << plane.bytesused;
       if (++nplane < metadata.planes().size())
           std::cout << "/";
   }

   std::cout << std::endl;

The expected output shows each monotonically increasing frame sequence number
and the bytes used by planes.

.. code:: text

   seq: 000000 bytesused: 1843200
   seq: 000002 bytesused: 1843200
   seq: 000004 bytesused: 1843200
   seq: 000006 bytesused: 1843200
   seq: 000008 bytesused: 1843200
   seq: 000010 bytesused: 1843200
   seq: 000012 bytesused: 1843200
   seq: 000014 bytesused: 1843200
   seq: 000016 bytesused: 1843200
   seq: 000018 bytesused: 1843200
   seq: 000020 bytesused: 1843200
   seq: 000022 bytesused: 1843200
   seq: 000024 bytesused: 1843200
   seq: 000026 bytesused: 1843200
   seq: 000028 bytesused: 1843200
   seq: 000030 bytesused: 1843200
   seq: 000032 bytesused: 1843200
   seq: 000034 bytesused: 1843200
   seq: 000036 bytesused: 1843200
   seq: 000038 bytesused: 1843200
   seq: 000040 bytesused: 1843200
   seq: 000042 bytesused: 1843200

A completed buffer of course contains image data, which can be accessed
through the per-plane dma-buf file descriptor transported by the
``FrameBuffer`` instance. An example of how to write image data to disk is
available in the `FileSink class`_ which is a part of the ``cam`` utility
application in the libcamera repository.

.. _FileSink class: https://git.libcamera.org/libcamera/libcamera.git/tree/src/cam/file_sink.cpp

With the handling of this request completed, it is possible to re-use the
request and the associated buffers and re-queue it to the camera
device:

.. code:: cpp

   request->reuse(Request::ReuseBuffers);
   camera->queueRequest(request);

Request queueing
----------------

The ``Camera`` device is now ready to receive frame capture requests and
actually start delivering frames. In order to prepare for that, an
application needs to first start the camera, and queue requests to it for
them to be processed.

In the main() function, just after having connected the
``Camera::requestCompleted`` signal to the callback handler, start the camera
and queue all the previously created requests.

.. code:: cpp

   camera->start();
   for (std::unique_ptr<Request> &request : requests)
       camera->queueRequest(request.get());

Event processing
~~~~~~~~~~~~~~~~

libcamera creates an internal execution thread at `CameraManager::start()`_
time to decouple its own event processing from the application's main thread.
Applications are thus free to manage their own execution as they see fit, and
only need to respond to events generated by libcamera emitted through
signals.

.. _CameraManager::start(): https://libcamera.org/api-html/classlibcamera_1_1CameraManager.html#a49e322880a2a26013bb0076788b298c5

Real-world applications will likely either integrate with the event loop of
the framework they use, or create their own event loop to respond to user
events. For the simple application presented in this example, it is enough to
prevent immediate termination by pausing for 3 seconds. During that time, the
libcamera thread will generate request completion events that the application
will handle in the ``requestComplete()`` slot connected to the
``Camera::requestCompleted`` signal.

.. code:: cpp

   std::this_thread::sleep_for(3000ms);

Clean up and stop the application
---------------------------------

The application is now finished with the camera and the resources the camera
uses, so it needs to do the following:

- stop the camera
- free the buffers in the FrameBufferAllocator and delete it
- release the lock on the camera and reset the pointer to it
- stop the camera manager

.. code:: cpp

   camera->stop();
   allocator->free(stream);
   delete allocator;
   camera->release();
   camera.reset();
   cm->stop();

   return 0;

In this instance the CameraManager will automatically be deleted by the
unique_ptr implementation when it goes out of scope.

Build and run instructions
--------------------------

To build the application, we recommend that you use the `Meson build system`_
which is also the official build system of the libcamera library.

Make sure both ``meson`` and ``libcamera`` are installed in your system.
Please refer to your distribution documentation to install meson and install
the most recent version of libcamera from the `git repository`_. You also
need to install the ``pkg-config`` tool to correctly identify the
libcamera.so object install location in the system.

.. _Meson build system: https://mesonbuild.com/
.. _git repository: https://git.libcamera.org/libcamera/libcamera.git/

Dependencies
~~~~~~~~~~~~

The test application presented here depends on the libcamera library to be
available in a path that meson can identify. The libcamera install procedure
performed using the ``ninja install`` command may by default deploy the
libcamera components in the ``/usr/local/lib`` path, or a package manager may
install it to ``/usr/lib`` depending on your distribution. If meson is unable
to find the location of the libcamera installation, you may need to instruct
meson to look into a specific path when searching for ``libcamera.so`` by
setting the ``PKG_CONFIG_PATH`` environment variable to the right location.

Adjust the following command to use the ``pkgconfig`` directory where
libcamera has been installed in your system.

.. code:: shell

   export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig/

Verify that ``pkg-config`` can identify the ``libcamera`` library with

.. code:: shell

   $ pkg-config --libs --cflags libcamera
   -I/usr/local/include/libcamera -L/usr/local/lib -lcamera -lcamera-base

``meson`` can alternatively use ``cmake`` to locate packages. Please refer to
the ``meson`` documentation if you prefer to use it in place of
``pkgconfig``.

Build file
~~~~~~~~~~

With the dependencies correctly identified, prepare a ``meson.build`` build
file to be placed in the same directory where the application lives. You can
name your application as you like, but be sure to update the following snippet
accordingly. In this example, the application file has been named
``simple-cam.cpp``.

.. code::

   project('simple-cam', 'cpp')

   simple_cam = executable('simple-cam',
       'simple-cam.cpp',
       dependencies: dependency('libcamera', required : true))

The ``dependencies`` line instructs meson to ask ``pkgconfig`` (or ``cmake``)
to locate the ``libcamera`` library, which the test application will be
dynamically linked against.

With the build file in place, compile and run the application with:

.. code:: shell

   $ meson build
   $ cd build
   $ ninja
   $ ./simple-cam

It is possible to increase the library debug output by using environment
variables which control the library log filtering system:

.. code:: shell

   $ LIBCAMERA_LOG_LEVELS=0 ./simple-cam

spider-cam/libcamera/Documentation/guides/introduction.rst

.. SPDX-License-Identifier: CC-BY-SA-4.0

Developers guide to libcamera
=============================

The Linux kernel handles multimedia devices through the 'Linux media'
subsystem and provides a set of APIs (application programming interfaces)
known collectively as V4L2 (`Video for Linux 2`_) and the `Media Controller`_
API which provide an interface to interact and control media devices.

Included in this subsystem are drivers for camera sensors, CSI2 (Camera
Serial Interface) receivers, and ISPs (Image Signal Processors).

The usage of these drivers to provide a functioning camera stack is a
responsibility that lies in userspace which is commonly implemented
separately by vendors without a common architecture or API for application
developers.

libcamera provides a complete camera stack for Linux based systems to
abstract functionality desired by camera application developers and process
the configuration of hardware and image control algorithms required to obtain
desirable results from the camera.

.. _Video for Linux 2: https://www.linuxtv.org/downloads/v4l-dvb-apis-new/userspace-api/v4l/v4l2.html
.. _Media Controller: https://www.linuxtv.org/downloads/v4l-dvb-apis-new/userspace-api/mediactl/media-controller.html


In this developers guide, we will explore the `Camera Stack`_ and how it can
be visualised at a high level, and explore the internal `Architecture`_ of
the libcamera library with its components. The current `Platform Support`_ is
detailed, as well as an overview of the `Licensing`_ requirements of the
project.

This introduction is followed by a walkthrough tutorial for newcomers wishing
to support a new platform with the `Pipeline Handler Writers Guide`_ and for
those looking to make use of the libcamera native API an `Application Writers
Guide`_ provides a tutorial of the key APIs exposed by libcamera.

.. _Pipeline Handler Writers Guide: pipeline-handler.html
.. _Application Writers Guide: application-developer.html

.. TODO: Correctly link to the other articles of the guide
Camera Stack
|
||||
------------
|
||||
|
||||
The libcamera library is implemented in userspace, and makes use of underlying
|
||||
kernel drivers that directly interact with hardware.
|
||||
|
||||
Applications can make use of libcamera through the native `libcamera API`_'s or
|
||||
through an adaptation layer integrating libcamera into a larger framework.
|
||||
|
||||
.. _libcamera API: https://www.libcamera.org/api-html/index.html
|
||||
|
||||
::
|
||||
|
||||
   Application Layer
     /    +--------------+  +--------------+  +--------------+  +--------------+
     |    |    Native    |  |   Framework  |  |    Native    |  |   Android    |
     |    |     V4L2     |  |  Application |  |   libcamera  |  |    Camera    |
     |    |  Application |  |  (gstreamer) |  |  Application |  |   Framework  |
     \    +--------------+  +--------------+  +--------------+  +--------------+

                 ^                 ^                 ^                 ^
                 |                 |                 |                 |
                 |                 |                 |                 |
                 v                 v                 |                 v
   Adaptation Layer                                  |
     /    +--------------+  +--------------+         |     +--------------+
     |    |     V4L2     |  |   gstreamer  |         |     |   Android    |
     |    | Compatibility|  |    element   |         |     |    Camera    |
     |    |  (preload)   |  |(libcamerasrc)|         |     |      HAL     |
     \    +--------------+  +--------------+         |     +--------------+
                                                     |
                 ^                 ^                 |             ^
                 |                 |                 |             |
                 |                 |                 |             |
                 v                 v                 v             v
   libcamera Framework
     /    +--------------------------------------------------------------------+
     |    |                                                                    |
     |    |                             libcamera                              |
     |    |                                                                    |
     \    +--------------------------------------------------------------------+

                      ^                  ^                  ^
      Userspace       |                  |                  |
    ----------------- | ---------------- | ---------------- | ---------------
      Kernel          |                  |                  |
                      v                  v                  v
                +-----------+      +-----------+      +-----------+
                |   Media   | <--> |   Video   | <--> |   V4L2    |
                |  Device   |      |  Device   |      |  Subdev   |
                +-----------+      +-----------+      +-----------+
The camera stack comprises four software layers. From bottom to top:

* The kernel drivers control the camera hardware and expose a low-level
  interface to userspace through the Linux kernel V4L2 family of APIs
  (Media Controller API, V4L2 Video Device API and V4L2 Subdev API).

* The libcamera framework is the core part of the stack. It handles all control
  of the camera devices in its core component, libcamera, and exposes a native
  C++ API to upper layers.

* The libcamera adaptation layer is an umbrella term designating the components
  that interface to libcamera in other frameworks. Notable examples are the V4L2
  compatibility layer, the gstreamer libcamera element, and the Android camera
  HAL implementation based on libcamera, which are provided as part of the
  libcamera project.

* The applications and upper level frameworks are based on the libcamera
  framework or libcamera adaptation, and are outside of the scope of the
  libcamera project; however, example native applications (cam, qcam) are
  provided for testing.

V4L2 Compatibility Layer
  V4L2 compatibility is achieved through a shared library that traps all
  accesses to camera devices and routes them to libcamera to emulate high-level
  V4L2 camera devices. It is injected in a process address space through
  ``LD_PRELOAD`` and is completely transparent for applications.

  The compatibility layer exposes camera device features on a best-effort basis,
  and aims for the level of features traditionally available from a UVC camera
  designed for video conferencing.

Android Camera HAL
  Camera support for Android is achieved through a generic Android camera HAL
  implementation on top of libcamera. The HAL implements features required by
  Android and out of scope from libcamera, such as JPEG encoding support.

  This component is used to provide support for ChromeOS platforms.

GStreamer element (gstlibcamerasrc)
  A `GStreamer element`_ is provided to allow capture from libcamera supported
  devices through GStreamer pipelines, and connect to other elements for further
  processing.

  Development of this element is ongoing and is limited to a single stream.

Native libcamera API
  Applications can make use of the libcamera API directly using the C++
  API. An example application and walkthrough using the libcamera API can be
  followed in the `Application Writers Guide`_.

.. _GStreamer element: https://gstreamer.freedesktop.org/documentation/application-development/basics/elements.html

Architecture
------------

While offering a unified API towards upper layers, and presenting itself as a
single library, libcamera isn't monolithic. It exposes multiple components
through its public API and is built around a set of separate helpers internally.
Hardware abstractions are handled through the use of device-specific components
where required, and dynamically loadable plugins are used to separate image
processing algorithms from the core libcamera codebase.

::
   --------------------------< libcamera Public API >---------------------------
                 ^                                   ^
                 |                                   |
                 v                                   v
          +-------------+  +---------------------------------------------------+
          |   Camera    |  |  Camera Device                                    |
          |   Manager   |  | +-----------------------------------------------+ |
          +-------------+  | |  Device-Agnostic                              | |
                 ^         | |                                               | |
                 |         | |                    +--------------------------+ |
                 |         | |                    | ~~~~~~~~~~~~~~~~~~~~~~~  | |
                 |         | |                    | {  +-----------------+ } | |
                 |         | |                    | }  | //// Image //// | { | |
                 |         | |                    | <-> | / Processing // | } | |
                 |         | |                    | }  | / Algorithms // | { | |
                 |         | |                    | {  +-----------------+ } | |
                 |         | |                    | ~~~~~~~~~~~~~~~~~~~~~~~  | |
                 |         | |                    | ======================== | |
                 |         | |                    |     +-----------------+  | |
                 |         | |                    |     | // Pipeline /// |  | |
                 |         | |                    | <-> | /// Handler /// |  | |
                 |         | |                    |     | /////////////// |  | |
                 |         | +--------------------+     +-----------------+  | |
                 |         |   Device-Specific                               | |
                 |         +---------------------------------------------------+
                 |                     ^                          ^
                 |                     |                          |
                 v                     v                          v
          +--------------------------------------------------------------------+
          |                    Helpers and Support Classes                     |
          |  +-------------+  +-------------+  +-------------+  +-------------+|
          |  |  MC & V4L2  |  |   Buffers   |  | Sandboxing  |  |   Plugins   ||
          |  |   Support   |  |  Allocator  |  |     IPC     |  |   Manager   ||
          |  +-------------+  +-------------+  +-------------+  +-------------+|
          |  +-------------+  +-------------+                                  |
          |  |  Pipeline   |  |     ...     |                                  |
          |  |   Runner    |  |             |                                  |
          |  +-------------+  +-------------+                                  |
          +--------------------------------------------------------------------+

            /// Device-Specific Components
            ~~~ Sandboxing

Camera Manager
  The Camera Manager enumerates cameras and instantiates Pipeline Handlers to
  manage each Camera that libcamera supports. The Camera Manager supports
  hotplug detection and notification events when supported by the underlying
  kernel devices.

  There is only ever one instance of the Camera Manager running per application.
  Each application's instance of the Camera Manager ensures that only a single
  application can take control of a camera device at once.

  Read the `Camera Manager API`_ documentation for more details.

.. _Camera Manager API: https://libcamera.org/api-html/classlibcamera_1_1CameraManager.html

Camera Device
  The Camera class represents a single item of camera hardware that is capable
  of producing one or more image streams, and provides the API to interact with
  the underlying device.

  If a system has multiple instances of the same hardware attached, each has its
  own instance of the Camera class.

  The API exposes full control of the device to upper layers of libcamera through
  the public API, making it the highest level object libcamera exposes, and the
  object that all other API operations interact with from configuration to
  capture.

  Read the `Camera API`_ documentation for more details.

.. _Camera API: https://libcamera.org/api-html/classlibcamera_1_1Camera.html

Pipeline Handler
  The Pipeline Handler manages the complex pipelines exposed by the kernel
  drivers through the Media Controller and V4L2 APIs. It abstracts pipeline
  handling to hide device-specific details from the rest of the library, and
  implements both pipeline configuration based on stream configuration, and
  pipeline runtime execution and scheduling when needed by the device.

  The Pipeline Handler lives in the same process as the rest of the library, and
  has access to all helpers and kernel camera-related devices.

  Hardware abstraction is handled by device-specific Pipeline Handlers, which
  are derived from the Pipeline Handler base class, allowing commonality to be
  shared among the implementations.

  Derived pipeline handlers create Camera device instances based on the devices
  they detect and support on the running system, and are responsible for
  managing the interactions with a camera device.

  More details can be found in the `PipelineHandler API`_ documentation, and the
  `Pipeline Handler Writers Guide`_.

.. _PipelineHandler API: https://libcamera.org/api-html/classlibcamera_1_1PipelineHandler.html

Image Processing Algorithms
  An image processing algorithm (IPA) component is a loadable plugin that
  implements 3A (Auto-Exposure, Auto-White Balance, and Auto-Focus) and other
  algorithms.

  The algorithms run on the CPU and interact with the camera devices through the
  Pipeline Handler to control hardware image processing based on the parameters
  supplied by upper layers, maintaining state and closing the control loop
  of the ISP.

  The component is sandboxed and can only interact with libcamera through the
  API provided by the Pipeline Handler; an IPA has no direct access to kernel
  camera devices.

  Open source IPA modules built with libcamera can be run in the same process
  space as libcamera; however, external IPA modules are run in a separate
  process from the main libcamera process. IPA modules have a restricted view
  of the system, including no access to networking APIs and limited access to
  file systems.

  IPA modules are only required for platforms and devices with an ISP controlled
  by the host CPU. Camera sensors which have an integrated ISP are not
  controlled through the IPA module.

Platform Support
----------------

The library currently supports the following hardware platforms specifically
with dedicated pipeline handlers:

- Intel IPU3 (ipu3)
- Rockchip RK3399 (rkisp1)
- Raspberry Pi 3 and 4 (rpi/vc4)

Furthermore, generic platform support is provided for the following:

- USB video device class cameras (uvcvideo)
- iMX7, Allwinner Sun6i (simple)
- Virtual media controller driver for test use cases (vimc)

Licensing
---------

The libcamera core is covered by the `LGPL-2.1-or-later`_ license. Pipeline
Handlers are a part of the libcamera code base and need to be contributed
upstream by device vendors. IPA modules included in libcamera are covered by a
free software license; however, third-parties may develop IPA modules outside of
libcamera and distribute them under a closed-source license, provided they do
not include source code from the libcamera project.

The libcamera project itself contains multiple libraries, applications and
utilities. Licenses are expressed through SPDX tags in text-based files that
support comments, and through the .reuse/dep5 file otherwise. A copy of all
licenses is stored in the LICENSES directory, and a full summary of the
licensing used throughout the project can be found in the COPYING.rst document.

Applications which link dynamically against libcamera and use only the public
API are an independent work of the authors and have no license restrictions
imposed upon them from libcamera.

.. _LGPL-2.1-or-later: https://spdx.org/licenses/LGPL-2.1-or-later.html
531
spider-cam/libcamera/Documentation/guides/ipa.rst
Normal file
@@ -0,0 +1,531 @@
.. SPDX-License-Identifier: CC-BY-SA-4.0

IPA Writer's Guide
==================

IPA modules are Image Processing Algorithm modules. They provide functionality
that the pipeline handler can use for image processing.

This guide covers the definition of the IPA interface, and how to plumb the
connection between the pipeline handler and the IPA.

The IPA interface and protocol
------------------------------

The IPA interface defines the interface between the pipeline handler and the
IPA. Specifically, it defines the functions that the IPA exposes that the
pipeline handler can call, and the signals that the pipeline handler can
connect to, in order to receive data from the IPA asynchronously. In addition,
it contains any custom data structures that the pipeline handler and IPA may
pass to each other.

It is possible to use the same IPA interface with multiple pipeline handlers
on different hardware platforms. Generally in such cases, these platforms would
have a common hardware ISP pipeline. For instance, the rkisp1 pipeline handler
supports both the RK3399 and the i.MX8MP as they integrate the same ISP.
However, the i.MX8MP has a more complex camera pipeline, which may call for a
dedicated pipeline handler in the future. As the ISP is the same as for the
RK3399, the same IPA interface could be used for both pipeline handlers. The
build files provide a mapping from pipeline handler to the IPA interface name
as detailed in :ref:`compiling-section`.

The IPA protocol refers to the agreement between the pipeline handler and the
IPA regarding the expected response(s) from the IPA for given calls to the IPA.
This protocol doesn't need to be declared anywhere in code, but it shall be
documented, as there may be multiple IPA implementations for one pipeline
handler.

As part of the design of libcamera, IPAs may be isolated in a separate process,
or run in the same process but a different thread from libcamera. The pipeline
handler and IPA shall not have to change their operation based on whether the
IPA is isolated or not, but the possibility of isolation needs to be kept in
mind. Therefore all data that is passed between them must be serializable, so
it must be defined separately in the `mojo Interface Definition Language`_
(IDL), and a code generator will generate headers and serializers corresponding
to the definitions. Every interface is defined in a mojom file and includes:

- the functions that the pipeline handler can call from the IPA
- signals in the pipeline handler that the IPA can emit
- any data structures that are to be passed between the pipeline handler and
  the IPA

All IPA modules of a given pipeline handler use the same IPA interface. The IPA
interface definition is thus written by the pipeline handler author, based on
how they design the interactions between the pipeline handler and the IPA.

The entire IPA interface, including the functions, signals, and any custom
structs shall be defined in a file named {interface_name}.mojom under
include/libcamera/ipa/.

.. _mojo Interface Definition Language: https://chromium.googlesource.com/chromium/src.git/+/master/mojo/public/tools/bindings/README.md

Namespacing
-----------

To avoid name collisions between data types defined by different IPA interfaces
and data types defined by libcamera, each IPA interface must be defined in its
own namespace.

The namespace is specified with mojo's module directive. It must be the first
non-comment line in the mojo data definition file. For example, the Raspberry
Pi IPA interface uses:

.. code-block:: none

   module ipa.rpi;

This will become the ipa::rpi namespace in C++ code.

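As an illustration, the following hand-written sketch (not actual generator output; ExampleConfig is a hypothetical type invented here) shows how C++ code refers to types placed in that nested namespace:

```cpp
#include <cstdint>

/*
 * Hand-written sketch, not generator output: "module ipa.rpi;" maps to the
 * nested C++ namespace ipa::rpi, so generated types are referenced with an
 * ipa::rpi:: prefix. ExampleConfig is a hypothetical type for illustration.
 */
namespace ipa {
namespace rpi {

struct ExampleConfig {
	uint32_t sensorMode;
};

} /* namespace rpi */
} /* namespace ipa */

/* Code outside the namespace uses the fully qualified name. */
ipa::rpi::ExampleConfig makeConfig()
{
	return ipa::rpi::ExampleConfig{ 2 };
}
```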
Data containers
---------------

Since the data passed between the pipeline handler and the IPA must support
serialization, any custom data containers must be defined with the mojo IDL.

The following libcamera objects are supported in the interface definition, and
may be used as function parameter types or struct field types:

- libcamera.ControlInfoMap
- libcamera.ControlList
- libcamera.FileDescriptor
- libcamera.IPABuffer
- libcamera.IPACameraSensorInfo
- libcamera.IPASettings
- libcamera.IPAStream
- libcamera.Point
- libcamera.Rectangle
- libcamera.Size
- libcamera.SizeRange

To use them, core.mojom must be included in the mojo data definition file:

.. code-block:: none

   import "include/libcamera/ipa/core.mojom";

Other custom structs may be defined and used as well. There is no requirement
that they be defined before usage. Enums and structs are supported.

The following is an example of a definition of an enum, for the purpose of
being used as flags:

.. code-block:: none

   enum ConfigParameters {
           ConfigLsTable = 0x01,
           ConfigStaggeredWrite = 0x02,
           ConfigSensor = 0x04,
           ConfigDropFrames = 0x08,
   };

The following is an example of a definition of a struct:

.. code-block:: none

   struct ConfigInput {
           uint32 op;
           uint32 transform;
           libcamera.FileDescriptor lsTableHandle;
           int32 lsTableHandleStatic = -1;
           map<uint32, libcamera.IPAStream> streamConfig;
           array<libcamera.IPABuffer> buffers;
   };

This example has some special things about it. First of all, it uses the
FileDescriptor data type. This type must be used to ensure that the file
descriptor that it contains is translated properly across the IPC boundary
(when the IPA is in an isolated process).

This does mean that if the file descriptor should be sent without being
translated (for example, for the IPA to tell the pipeline handler which
fd *that the pipeline handler holds* to act on), then it must be in a
regular int32 type.

This example also illustrates that struct fields may have default values, as
is assigned to lsTableHandleStatic. This is the value that the field will
take when the struct is constructed with the default constructor.

Arrays and maps are supported as well. They are translated to C++ vectors and
maps, respectively. The members of the arrays and maps are embedded, and cannot
be const.

Note that nullable fields, static-length arrays, handles, and unions, which
are supported by mojo, are not supported by our code generator.

The Main IPA interface
----------------------

The IPA interface is split in two parts, the Main IPA interface, which
describes the functions that the pipeline handler can call from the IPA,
and the Event IPA interface, which describes the signals received by the
pipeline handler that the IPA can emit. Both must be defined. This section
focuses on the Main IPA interface.

The main interface must be named IPA{interface_name}Interface.

The functions that the pipeline handler can call from the IPA may be
synchronous or asynchronous. Synchronous functions do not return until the IPA
returns from the function, while asynchronous functions return immediately
without waiting for the IPA to return.

At a minimum, the following three functions must be present (and implemented):

- init();
- start();
- stop();

All three of these functions are synchronous. The parameters for start() and
init() may be customized.

init() initializes the IPA interface. It shall be called before any other
function of the IPAInterface.

stop() informs the IPA module that the camera is stopped. The IPA module shall
release resources prepared in start().

A configure() function is recommended. Any ControlInfoMap instances that will be
used by the IPA must be sent to the IPA from the pipeline handler, at configure
time, for example.

All input parameters will become const references, except for arithmetic types,
which will be passed by value. Output parameters will become pointers, unless
the first output parameter is an int32, or there is only one primitive output
parameter, in which case it will become a regular return value.

const is not allowed inside of arrays and maps. mojo arrays will become C++
std::vector<>.

By default, all functions defined in the main interface are synchronous. This
means that in the case of IPC (i.e. isolated IPA), the function call will not
return until the return value or output parameters are ready. To specify an
asynchronous function, the [async] attribute can be used. Asynchronous
functions must not have any return value or output parameters, since in the
case of IPC the call needs to return immediately.

It is also possible that the IPA will not be run in isolation. In this case,
the IPA thread will not exist until start() is called. This means that in the
case of no isolation, asynchronous calls cannot be made before start(). Since
the IPA interface must be the same regardless of isolation, the same
restriction applies to the case of isolation, and any function that will be
called before start() must be synchronous.

In addition, any call made after start() and before stop() must be
asynchronous. The motivation for this is to avoid damaging real-time
performance of the pipeline handler. If the pipeline handler wants some data
from the IPA, the IPA should return the data asynchronously via an event
(see "The Event IPA interface").

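To make the calling convention concrete, here is a self-contained sketch of the non-isolated threading model described above. This is plain C++, not libcamera code, and AsyncIpa with its members are hypothetical names: an asynchronous call merely queues work for the IPA thread started by start() and returns immediately, with the result delivered later through an event callback.

```cpp
#include <condition_variable>
#include <cstdint>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>

/*
 * Sketch (not libcamera code) of a non-isolated IPA's threading model:
 * signalStatReady() is "asynchronous" in that it only enqueues the buffer
 * id and returns; the IPA thread processes it and reports back through the
 * statReady event callback, closing the loop without blocking the caller.
 */
class AsyncIpa
{
public:
	std::function<void(uint32_t)> statReady; /* event callback */

	void start()
	{
		running_ = true;
		thread_ = std::thread([this] { run(); });
	}

	void stop()
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			running_ = false;
		}
		cv_.notify_one();
		thread_.join();
	}

	/* Asynchronous: returns without waiting for processing. */
	void signalStatReady(uint32_t bufferId)
	{
		{
			std::lock_guard<std::mutex> lock(mutex_);
			queue_.push(bufferId);
		}
		cv_.notify_one();
	}

private:
	void run()
	{
		std::unique_lock<std::mutex> lock(mutex_);
		while (true) {
			cv_.wait(lock, [this] {
				return !queue_.empty() || !running_;
			});
			if (!running_ && queue_.empty())
				break;
			uint32_t id = queue_.front();
			queue_.pop();
			lock.unlock();
			statReady(id); /* deliver result as an event */
			lock.lock();
		}
	}

	std::thread thread_;
	std::mutex mutex_;
	std::condition_variable cv_;
	std::queue<uint32_t> queue_;
	bool running_ = false;
};
```

Note that stop() drains any queued work before joining the thread, mirroring the rule that the IPA releases in stop() what it prepared in start().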
The following is an example of a main interface definition:

.. code-block:: none

   interface IPARPiInterface {
           init(libcamera.IPASettings settings, string sensorName)
                   => (int32 ret, bool metadataSupport);
           start() => (int32 ret);
           stop();

           configure(libcamera.IPACameraSensorInfo sensorInfo,
                     map<uint32, libcamera.IPAStream> streamConfig,
                     map<uint32, libcamera.ControlInfoMap> entityControls,
                     ConfigInput ipaConfig)
                   => (int32 ret, ConfigOutput results);

           mapBuffers(array<IPABuffer> buffers);
           unmapBuffers(array<uint32> ids);

           [async] signalStatReady(uint32 bufferId);
           [async] signalQueueRequest(libcamera.ControlList controls);
           [async] signalIspPrepare(ISPConfig data);
   };

The first three functions are the required functions. Functions do not need to
have return values, like stop(), mapBuffers(), and unmapBuffers(). In the case
of asynchronous functions, as explained before, they *must not* have return
values.

The Event IPA interface
-----------------------

The event IPA interface describes the signals received by the pipeline handler
that the IPA can emit. It must be defined. If there are no event functions,
then it may be empty. These emissions are meant to notify the pipeline handler
of some event, such as request data being ready, and *must not* be used to
drive the camera pipeline from the IPA.

The event interface must be named IPA{interface_name}EventInterface.

Functions defined in the event interface are implicitly asynchronous.
Thus they cannot return any value. Specifying the [async] tag is not
necessary.

Functions defined in the event interface will become signals in the IPA
interface. The IPA can emit signals, while the pipeline handler can connect
slots to them.

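The emit/connect pattern can be sketched with a minimal stand-in for a signal class. This is an illustration only, not libcamera's actual Signal implementation, which additionally handles disconnection, object lifetimes, and cross-thread delivery:

```cpp
#include <cstdint>
#include <functional>
#include <utility>
#include <vector>

/*
 * Minimal stand-in for a signal: connected slots are stored and invoked in
 * order when the signal is emitted. Illustration only; libcamera's real
 * Signal class also manages disconnection and cross-thread delivery.
 */
template<typename... Args>
class Signal
{
public:
	void connect(std::function<void(Args...)> slot)
	{
		slots_.push_back(std::move(slot));
	}

	void emit(Args... args)
	{
		for (auto &slot : slots_)
			slot(args...);
	}

private:
	std::vector<std::function<void(Args...)>> slots_;
};
```

With this sketch, the IPA side calls emit() and each connected pipeline handler slot runs with the emitted arguments.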
The following is an example of an event interface definition:

.. code-block:: none

   interface IPARPiEventInterface {
           statsMetadataComplete(uint32 bufferId,
                                 libcamera.ControlList controls);
           runIsp(uint32 bufferId);
           embeddedComplete(uint32 bufferId);
           setIsp(libcamera.ControlList controls);
           setStaggered(libcamera.ControlList controls);
   };

.. _compiling-section:

Compiling the IPA interface
---------------------------

After the IPA interface is defined in include/libcamera/ipa/{interface_name}.mojom,
an entry for it must be added in meson so that it can be compiled. The filename
must be added to the pipeline_ipa_mojom_mapping variable in
include/libcamera/ipa/meson.build. This variable maps the pipeline handler name
to its IPA interface file.

For example, adding the raspberrypi.mojom file to meson:

.. code-block:: none

   pipeline_ipa_mojom_mapping = [
       'rpi/vc4': 'raspberrypi.mojom',
   ]

This will cause the mojo data definition file to be compiled. Specifically, it
generates five files:

- a header describing the custom data structures, and the complete IPA
  interface (at {$build_dir}/include/libcamera/ipa/{interface}_ipa_interface.h)

- a serializer implementing de/serialization for the custom data structures (at
  {$build_dir}/include/libcamera/ipa/{interface}_ipa_serializer.h)

- a proxy header describing a specialized IPA proxy (at
  {$build_dir}/include/libcamera/ipa/{interface}_ipa_proxy.h)

- a proxy source implementing the IPA proxy (at
  {$build_dir}/src/libcamera/proxy/{interface}_ipa_proxy.cpp)

- a proxy worker source implementing the other end of the IPA proxy (at
  {$build_dir}/src/libcamera/proxy/worker/{interface}_ipa_proxy_worker.cpp)

The IPA proxy serves as the layer between the pipeline handler and the IPA, and
handles threading vs isolation transparently. The pipeline handler and the IPA
only require the interface header and the proxy header. The serializer is only
used internally by the proxy.

Using the custom data structures
--------------------------------

To use the custom data structures that are defined in the mojo data definition
file, the following header must be included:

.. code-block:: C++

   #include <libcamera/ipa/{interface_name}_ipa_interface.h>

The POD types of the structs simply become their C++ counterparts, e.g. uint32
in mojo will become uint32_t in C++. mojo map becomes C++ std::map, and mojo
array becomes C++ std::vector. All members of maps and vectors are embedded,
and are not pointers. The members cannot be const.

The names of all the fields of structs can be used in C++ in exactly the same
way as they are defined in the data definition file. For example, the following
struct as defined in the mojo file:

.. code-block:: none

   struct SensorConfig {
           uint32 gainDelay = 1;
           uint32 exposureDelay;
           uint32 sensorMetadata;
   };

will become this in C++:

.. code-block:: C++

   struct SensorConfig {
           uint32_t gainDelay;
           uint32_t exposureDelay;
           uint32_t sensorMetadata;
   };

The generated structs will also have two constructors, a constructor that
fills all fields with the default values, and a second constructor that takes
a value for every field. The default value constructor will fill in the fields
with the specified default value if it exists. In the above example, gainDelay
will be initialized to 1. If no default value is specified, then it will be
filled in as zero (or -1 for a FileDescriptor type).

All fields and constructors/destructors in these generated structs are public.

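A hand-written sketch (not the generator's actual output) of what those two constructors look like for the SensorConfig example above; the default constructor applies the mojom defaults, so gainDelay becomes 1 and the unspecified fields become zero:

```cpp
#include <cstdint>

/*
 * Hand-written sketch of the constructors the code generator emits for the
 * SensorConfig example: the default constructor applies the mojom default
 * values (gainDelay = 1, unspecified fields zero), and the second
 * constructor takes a value for every field.
 */
struct SensorConfig {
	SensorConfig()
		: gainDelay(1), exposureDelay(0), sensorMetadata(0)
	{
	}

	SensorConfig(uint32_t gainDelayParam, uint32_t exposureDelayParam,
		     uint32_t sensorMetadataParam)
		: gainDelay(gainDelayParam), exposureDelay(exposureDelayParam),
		  sensorMetadata(sensorMetadataParam)
	{
	}

	uint32_t gainDelay;
	uint32_t exposureDelay;
	uint32_t sensorMetadata;
};
```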
Using the IPA interface (pipeline handler)
------------------------------------------

The following headers are necessary to use an IPA in the pipeline handler
(with raspberrypi as an example):

.. code-block:: C++

   #include <libcamera/ipa/raspberrypi_ipa_interface.h>
   #include <libcamera/ipa/raspberrypi_ipa_proxy.h>

The first header includes definitions of the custom data structures, and
the definition of the complete IPA interface (including both the Main and
the Event IPA interfaces). The name of the header file comes from the name
of the mojom file, which in this case was raspberrypi.mojom.

The second header includes the definition of the specialized IPA proxy. It
exposes the complete IPA interface. We will see how to use it in this section.

In the pipeline handler, we first need to construct a specialized IPA proxy.
From the point of view of the pipeline handler, this is the object that is the
IPA.

To do so, we invoke the IPAManager:

.. code-block:: C++

   std::unique_ptr<ipa::rpi::IPAProxyRPi> ipa_ =
           IPAManager::createIPA<ipa::rpi::IPAProxyRPi>(pipe_, 1, 1);

The ipa::rpi namespace comes from the namespace that we defined in the mojo
data definition file, in the "Namespacing" section. The name of the proxy,
IPAProxyRPi, comes from the name given to the main IPA interface,
IPARPiInterface, in "The Main IPA interface" section.

The return value of IPAManager::createIPA shall be error-checked, to confirm
that the returned pointer is not a nullptr.

After this, before initializing the IPA, slots should be connected to all of
the IPA's signals, as defined in the Event IPA interface:

.. code-block:: C++

   ipa_->statsMetadataComplete.connect(this, &RPiCameraData::statsMetadataComplete);
   ipa_->runIsp.connect(this, &RPiCameraData::runIsp);
   ipa_->embeddedComplete.connect(this, &RPiCameraData::embeddedComplete);
   ipa_->setIsp.connect(this, &RPiCameraData::setIsp);
   ipa_->setStaggered.connect(this, &RPiCameraData::setStaggered);

The slot functions have a function signature based on the function definition
in the Event IPA interface. All plain old data (POD) types are passed as-is
(with their C++ versions, e.g. uint32 -> uint32_t), and all structs are passed
as const references.

For example, for the following entry in the Event IPA interface:

.. code-block:: none

   statsMetadataComplete(uint32 bufferId, ControlList controls);

A function with the following function signature shall be connected to the
signal:

.. code-block:: C++

   void statsMetadataComplete(uint32_t bufferId, const ControlList &controls);

After connecting the slots to the signals, the IPA should be initialized
(using the main interface definition example from earlier):

.. code-block:: C++

   IPASettings settings{};
   bool metadataSupport;
   int ret = ipa_->init(settings, "sensor name", &metadataSupport);

At this point, any IPA functions that were defined in the Main IPA interface
can be called as if they were regular member functions, for example (based on
the main interface definition example from earlier):

.. code-block:: C++

   ipa_->start();
   int ret = ipa_->configure(sensorInfo_, streamConfig, entityControls, ipaConfig, &result);
   ipa_->signalStatReady(RPi::BufferMask::STATS | static_cast<unsigned int>(index));

Remember that any functions designated as asynchronous *must not* be called
before start().

Notice that for both init() and configure(), the first output parameter is a
direct return, since it is an int32, while the other output parameter is a
pointer-based output parameter.

Using the IPA interface (IPA Module)
|
||||
------------------------------------

The following header is necessary to implement an IPA Module (with raspberrypi
as an example):

.. code-block:: C++

   #include <libcamera/ipa/raspberrypi_ipa_interface.h>

This header includes definitions of the custom data structures, and
the definition of the complete IPA interface (including both the Main and
the Event IPA interfaces). The name of the header file comes from the name
of the mojom file, which in this case was raspberrypi.mojom.

The IPA module must implement the IPA interface class that is defined in the
header. In the case of our example, that is ipa::rpi::IPARPiInterface. The
ipa::rpi namespace comes from the namespace that we defined in the mojom data
definition file, in the "Namespacing" section. The name of the interface is the
same as the name given to the Main IPA interface.

The function signature rules are the same as for the slots on the pipeline
handler side; PODs are passed by value, and structs are passed by const
reference. For the Main IPA interface, output values are also allowed (only
for synchronous calls), so there may be output parameters as well. If the
first output parameter is a POD it will be returned by value, otherwise
it will be returned by an output parameter pointer. The second and any other
output parameters will also be returned by output parameter pointers.

For example, for the following function specification in the Main IPA interface
definition:

.. code-block:: none

   configure(libcamera.IPACameraSensorInfo sensorInfo,
             uint32 exampleNumber,
             map<uint32, libcamera.IPAStream> streamConfig,
             map<uint32, libcamera.ControlInfoMap> entityControls,
             ConfigInput ipaConfig)
   => (int32 ret, ConfigOutput results);

We will need to implement a function with the following function signature:

.. code-block:: C++

   int configure(const IPACameraSensorInfo &sensorInfo,
                 uint32_t exampleNumber,
                 const std::map<unsigned int, IPAStream> &streamConfig,
                 const std::map<unsigned int, ControlInfoMap> &entityControls,
                 const ipa::rpi::ConfigInput &data,
                 ipa::rpi::ConfigOutput *response);

The return value is int, because the first output parameter is int32. The rest
of the output parameters (in this case, only response) become output parameter
pointers. The non-POD input parameters become const references, and the POD
input parameter is passed by value.

At any time after start() and before stop() (though usually only in response to
an IPA call), the IPA may send data to the pipeline handler by emitting
signals. These signals are defined in the C++ IPA interface class (which is in
the generated and included header).

For example, for the following function defined in the Event IPA interface:

.. code-block:: none

   statsMetadataComplete(uint32 bufferId, libcamera.ControlList controls);

We can emit a signal like so:

.. code-block:: C++

   statsMetadataComplete.emit(bufferId & RPi::BufferMask::ID, libcameraMetadata_);
.. SPDX-License-Identifier: CC-BY-SA-4.0

Tracing Guide
=============

Guide to tracing in libcamera.

Profiling vs Tracing
--------------------

Tracing is the process of recording timestamps at specific locations in the
code. libcamera provides such a tracing facility; this guide shows how to use
it.

Tracing should not be confused with profiling, which samples execution
at periodic points in time. This can be done with other tools such as
callgrind, perf, gprof, etc., without modification to the application,
and is out of scope for this guide.

Compiling
---------

To compile libcamera with tracing support, enable it through the meson
``tracing`` option. Tracing depends on the lttng-ust library (available in the
``liblttng-ust-dev`` package on Debian-based distributions).
By default the ``tracing`` option is set to ``auto``, so tracing is enabled
if lttng-ust is detected. Conversely, if the option is set to disabled, then
libcamera is compiled without tracing support.
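
For example, assuming a meson build directory named ``build`` (the directory
name is an assumption, not mandated by libcamera):

```shell
# Configure a new build with tracing support explicitly enabled
# (requires lttng-ust to be installed)
meson setup build -Dtracing=enabled

# Or reconfigure an existing build directory without tracing support
meson configure build -Dtracing=disabled
```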

Defining tracepoints
--------------------

libcamera already contains a set of tracepoints. To define additional
tracepoints, create a file
``include/libcamera/internal/tracepoints/{file}.tp``, where ``file`` is a
reasonable name related to the category of tracepoints that you wish to
define. For example, the tracepoints file for the Request object is called
``request.tp``. An entry for this file must be added in
``include/libcamera/internal/tracepoints/meson.build``.

In this tracepoints file, define your tracepoints `as mandated by lttng
<https://lttng.org/man/3/lttng-ust>`_. The header boilerplate must *not* be
included (as it will conflict with the rest of our infrastructure); only the
tracepoint definitions (with the ``TRACEPOINT_*`` macros) should be included.

The tracepoint provider shall be ``libcamera`` for all tracepoints. According
to lttng, the tracepoint provider should be per-project, which is the
rationale for this decision. To group tracepoint events, we recommend naming
them ``{class_name}_{tracepoint_name}``, for example, ``request_construct``
for a tracepoint for the constructor of the Request class.

Tracepoint arguments may take C++ object pointers, in which case the usual
C++ namespacing rules apply. The header that contains the necessary class
definitions must be included at the top of the tracepoint provider file.

Note: the final parameter in ``TP_ARGS`` *must not* have a trailing comma, and
the parameters of ``TP_FIELDS`` are *space-separated*. Not following these
rules will cause compilation errors.
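
Putting these rules together, a tracepoints file might look like the following
sketch (modelled on a ``request_construct`` tracepoint; the exact fields are
illustrative, and the file deliberately cannot compile standalone since the
lttng header boilerplate is supplied by the build infrastructure):

```
/* A sketch of a request.tp tracepoints file. */
#include <libcamera/request.h>

TRACEPOINT_EVENT(
	libcamera,
	request_construct,
	TP_ARGS(
		libcamera::Request *, req
	),
	TP_FIELDS(
		ctf_integer_hex(uint64_t, cookie, req->cookie())
		ctf_integer(int, status, req->status())
	)
)
```

Note how the final ``TP_ARGS`` parameter has no trailing comma, and the two
``TP_FIELDS`` entries are separated by whitespace only.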

Using tracepoints (in libcamera)
--------------------------------

To use tracepoints in libcamera, first include the header:

``#include "libcamera/internal/tracepoints.h"``

Then use the tracepoint:

``LIBCAMERA_TRACEPOINT({tracepoint_event}, args...)``

This macro must be used, as opposed to lttng's macros directly, because
lttng is an optional dependency of libcamera, so the code must compile and run
even when lttng is not present or when tracing is disabled.

The tracepoint provider name, as declared in the tracepoint definition, is not
included in the parameters of the tracepoint.

There are also two special tracepoints available for tracing IPA calls:

``LIBCAMERA_TRACEPOINT_IPA_BEGIN({pipeline_name}, {ipa_function})``

``LIBCAMERA_TRACEPOINT_IPA_END({pipeline_name}, {ipa_function})``

These shall be placed where an IPA function is called from the pipeline
handler, and where the pipeline handler receives the corresponding response
from the IPA, respectively. These are the tracepoints that our sample analysis
script (see "Analyzing a trace") scans for when computing statistics on IPA
call time.
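
In a pipeline handler, the pair might be placed like so (a sketch only:
``rpi`` and ``signalStatReady`` come from the Raspberry Pi example used
earlier, while the surrounding function bodies are hypothetical):

```
/* Where the pipeline handler calls into the IPA: */
LIBCAMERA_TRACEPOINT_IPA_BEGIN(rpi, signalStatReady);
ipa_->signalStatReady(bufferId);

/* In the slot that receives the corresponding response: */
void PipelineHandlerRPi::statsMetadataComplete(uint32_t bufferId,
                                               const ControlList &controls)
{
	LIBCAMERA_TRACEPOINT_IPA_END(rpi, signalStatReady);
	/* ... */
}
```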

Using tracepoints (from an application)
---------------------------------------

As applications are not part of libcamera, but rather users of libcamera,
applications should seek their own tracing mechanisms. For ease of tracing
the application alongside tracing libcamera, it is recommended to also
`use lttng <https://lttng.org/docs/#doc-tracing-your-own-user-application>`_.

Using tracepoints (from closed-source IPA)
------------------------------------------

Similar to applications, closed-source IPAs can simply use lttng on their own,
or any other tracing mechanism if desired.

Collecting a trace
------------------

A trace can be collected fairly simply from lttng:

.. code-block:: bash

   lttng create $SESSION_NAME
   lttng enable-event -u libcamera:\*
   lttng start
   # run libcamera application
   lttng stop
   lttng view
   lttng destroy $SESSION_NAME

See the `lttng documentation <https://lttng.org/docs/>`_ for further details.

The location of the trace file is printed when running
``lttng create $SESSION_NAME``. After destroying the session, it can still be
viewed with ``lttng view -t $PATH_TO_TRACE``, where ``$PATH_TO_TRACE`` is the
path that was printed when the session was created. This is the same path that
is used when analyzing traces programmatically, as described in the next
section.

Analyzing a trace
-----------------

As mentioned above, while an lttng tracing session exists and the trace is not
running, the trace output can be viewed as text with ``lttng view``.

The trace log can also be viewed as text using babeltrace2. See the
`lttng trace analysis documentation
<https://lttng.org/docs/#doc-viewing-and-analyzing-your-traces-bt>`_
for further details.

babeltrace2 also has a C API and Python bindings that can be used to process
traces. See the
`babeltrace2 Python bindings documentation <https://babeltrace.org/docs/v2.0/python/bt2/>`_
and the
`babeltrace2 C API documentation <https://babeltrace.org/docs/v2.0/libbabeltrace2/>`_
for more details.

As an example, there is a script ``utils/tracepoints/analyze-ipa-trace.py``
that gathers statistics for the time taken for an IPA function call, by
measuring the time difference between pairs of events
``libcamera:ipa_call_start`` and ``libcamera:ipa_call_finish``.
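
The core of such an analysis, pairing each ``ipa_call_start`` with the
following ``ipa_call_finish`` and aggregating the deltas, can be sketched
without babeltrace2. The ``(name, timestamp)`` tuples below are hypothetical
stand-ins for what iterating a trace with the bt2 Python bindings would yield:

```python
import statistics

def ipa_call_durations(events):
    """Pair start/finish events and return per-call durations in nanoseconds.

    `events` is an iterable of (name, timestamp_ns) tuples, ordered by time,
    as could be extracted from a trace with the bt2 Python bindings.
    """
    durations = []
    start = None
    for name, ts in events:
        if name == "libcamera:ipa_call_start":
            start = ts
        elif name == "libcamera:ipa_call_finish" and start is not None:
            durations.append(ts - start)
            start = None
    return durations

# Hypothetical events as they might appear in a collected trace.
events = [
    ("libcamera:ipa_call_start", 1000),
    ("libcamera:ipa_call_finish", 1400),
    ("libcamera:ipa_call_start", 2000),
    ("libcamera:ipa_call_finish", 2600),
]

durations = ipa_call_durations(events)
print(durations)  # [400, 600]
print(statistics.mean(durations))
```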