interface ISemanticBuffer (Niantic.ARDK.AR.Awareness.Semantics.ISemanticBuffer)

Overview

interface ISemanticBuffer:
    Niantic.ARDK.AR.Awareness.IDataBuffer,
    IDisposable {
    // properties

    uint ChannelCount;
    string[] ChannelNames;

    // methods

    bool CreateOrUpdateTextureARGB32(
        ref Texture2D texture,
        int channelIndex,
        FilterMode filterMode = FilterMode.Point
    );

    bool CreateOrUpdateTextureARGB32(
        ref Texture2D texture,
        int[] channels,
        FilterMode filterMode = FilterMode.Point
    );

    bool CreateOrUpdateTextureRFloat(
        ref Texture2D texture,
        FilterMode filterMode = FilterMode.Point
    );

    bool DoesChannelExist(int channelIndex);
    bool DoesChannelExist(string channelName);
    bool DoesChannelExistAt(int x, int y, int channelIndex);
    bool DoesChannelExistAt(int x, int y, string channelName);

    bool DoesChannelExistAt(
        Vector2 point,
        int viewportWidth,
        int viewportHeight,
        int channelIndex
    );

    bool DoesChannelExistAt(
        Vector2 point,
        int viewportWidth,
        int viewportHeight,
        string channelName
    );

    bool DoesChannelExistAt(Vector2 uv, int channelIndex);
    bool DoesChannelExistAt(Vector2 uv, string channelName);
    ISemanticBuffer FitToViewport(int viewportWidth, int viewportHeight);
    int GetChannelIndex(string channelName);
    UInt32 GetChannelTextureMask(int channelIndex);
    UInt32 GetChannelTextureMask(int[] channelIndices);
    UInt32 GetChannelTextureMask(string channelName);
    UInt32 GetChannelTextureMask(string[] channelNames);

    ISemanticBuffer Interpolate(
        IARCamera arCamera,
        int viewportWidth,
        int viewportHeight,
        float backProjectionDistance = AwarenessParameters.DefaultBackProjectionDistance
    );

    ISemanticBuffer RotateToScreenOrientation();
    UInt32 Sample(Vector2 uv);
    UInt32 Sample(Vector2 uv, Matrix4x4 transform);
};

Inherited Members

public:
    // properties

    UInt32 Height;
    CameraIntrinsics Intrinsics;
    bool IsKeyframe;
    Matrix4x4 ViewMatrix;
    UInt32 Width;
    NativeArray<T> Data;

    // methods

    IAwarenessBuffer GetCopy();

Detailed Documentation

Properties

uint ChannelCount

The number of channels contained in this buffer.

string[] ChannelNames

An array of semantic class names, in the order their channels appear in the data.

Methods

bool CreateOrUpdateTextureARGB32(
    ref Texture2D texture,
    int channelIndex,
    FilterMode filterMode = FilterMode.Point
)

Update (or create, if needed) a texture with the data of one of this buffer’s channels.

Parameters:

texture

Reference to the texture to copy to. This method will create a texture if the reference is null.

channelIndex

Channel index of the semantic class to copy.

filterMode

Texture filtering mode.

Returns:

True if the buffer was successfully copied to the given texture.
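A typical caller keeps a cached Texture2D field and refreshes it whenever a new buffer arrives. The sketch below is illustrative, not part of the API: the `_semanticTexture` field, the `OnSemanticBufferUpdated` handler, and the "sky" channel name are all assumptions.

```csharp
// Illustrative sketch: cache one texture and refresh it per buffer update.
// _semanticTexture starts out null; the call allocates it on first use
// and updates it in place afterwards.
private Texture2D _semanticTexture;

void OnSemanticBufferUpdated(ISemanticBuffer buffer)
{
  // "sky" is an example channel name; verify it exists before copying.
  var skyIndex = buffer.GetChannelIndex("sky");
  if (skyIndex < 0)
    return;

  if (buffer.CreateOrUpdateTextureARGB32(ref _semanticTexture, skyIndex))
  {
    // _semanticTexture now holds the sky mask, e.g. for a material.
  }
}
```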

bool CreateOrUpdateTextureARGB32(
    ref Texture2D texture,
    int[] channels,
    FilterMode filterMode = FilterMode.Point
)

Update (or create, if needed) a texture with data composited of multiple channels from this buffer.

Parameters:

texture

Reference to the texture to copy to. This method will create a texture if the reference is null.

channels

Semantic channel indices to copy to this texture.

filterMode

Texture filtering mode.

Returns:

True if the buffer was successfully copied to the given texture.

bool CreateOrUpdateTextureRFloat(
    ref Texture2D texture,
    FilterMode filterMode = FilterMode.Point
)

Update (or create, if needed) a texture with the data of the entire buffer.

Parameters:

texture

Reference to the texture to copy to. This method will create a texture if the reference is null.

filterMode

Texture filtering mode.

Returns:

True if the buffer was successfully copied to the given texture.
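The same caching pattern works for the packed whole-buffer copy. This is a sketch; the `_packedSemantics` field name is an assumption.

```csharp
// Copies the entire buffer into a single RFloat texture. Each pixel packs
// the per-channel bits, so a shader can test individual channels against a
// mask obtained from GetChannelTextureMask.
private Texture2D _packedSemantics;

void UpdatePackedTexture(ISemanticBuffer buffer)
{
  buffer.CreateOrUpdateTextureRFloat(ref _packedSemantics);
}
```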

bool DoesChannelExist(int channelIndex)

Check if a certain channel exists anywhere in this buffer.

Parameters:

channelIndex

Channel index of the semantic class to look for.

Returns:

True if the channel exists.

bool DoesChannelExist(string channelName)

Check if a certain channel exists anywhere in this buffer.

Parameters:

channelName

Name of the semantic class to look for.

Returns:

True if the channel exists.
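A frame-level existence check is a cheap early-out before any per-pixel work. A minimal sketch, with "ground" as an illustrative channel name:

```csharp
// Skip further processing when the class of interest is absent from the
// entire frame.
if (!buffer.DoesChannelExist("ground"))
  return;
```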

bool DoesChannelExistAt(int x, int y, int channelIndex)

Check if a pixel in this semantic buffer contains a certain channel.

Parameters:

x

Pixel position on the x-axis.

y

Pixel position on the y-axis.

channelIndex

Channel index of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.

bool DoesChannelExistAt(int x, int y, string channelName)

Check if a pixel in this semantic buffer contains a certain channel.

Parameters:

x

Pixel position on the x-axis.

y

Pixel position on the y-axis.

channelName

Name of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.

bool DoesChannelExistAt(
    Vector2 point,
    int viewportWidth,
    int viewportHeight,
    int channelIndex
)

Check if a pixel in this semantic buffer contains a certain channel. This method samples the semantics buffer using normalised viewport coordinates.

Parameters:

point

Normalised viewport coordinates. The bottom-left is (0,0); the top-right is (1,1).

viewportWidth

Width of the viewport. In most cases this equals the rendering camera’s pixel width.

viewportHeight

Height of the viewport. In most cases this equals the rendering camera’s pixel height.

channelIndex

Channel index of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.

bool DoesChannelExistAt(
    Vector2 point,
    int viewportWidth,
    int viewportHeight,
    string channelName
)

Check if a pixel in this semantic buffer contains a certain channel. This method samples the semantics buffer using normalised viewport coordinates.

Parameters:

point

Normalised viewport coordinates. The bottom-left is (0,0); the top-right is (1,1).

viewportWidth

Width of the viewport. In most cases this equals the rendering camera’s pixel width.

viewportHeight

Height of the viewport. In most cases this equals the rendering camera’s pixel height.

channelName

Name of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.
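The viewport-coordinate overloads suit hit-testing a screen position, such as a tap. A sketch under stated assumptions: the `IsGroundAt` helper and the "ground" channel name are hypothetical, and the screen position is normalised against the rendering camera's pixel dimensions.

```csharp
// Hypothetical tap handler: test whether the tapped screen position is
// classified as "ground".
bool IsGroundAt(ISemanticBuffer buffer, Vector2 screenPos, Camera camera)
{
  // Convert the pixel position to normalised viewport coordinates.
  var point = new Vector2(
    screenPos.x / camera.pixelWidth,
    screenPos.y / camera.pixelHeight);

  return buffer.DoesChannelExistAt(
    point, camera.pixelWidth, camera.pixelHeight, "ground");
}
```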

bool DoesChannelExistAt(Vector2 uv, int channelIndex)

Check if a pixel in this semantic buffer contains a certain channel. This method samples the semantics buffer using normalised texture coordinates.

Parameters:

uv

Normalised texture coordinates. The bottom-left is (0,1); the top-right is (1,0).

channelIndex

Channel index of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.

bool DoesChannelExistAt(Vector2 uv, string channelName)

Check if a pixel in this semantic buffer contains a certain channel. This method samples the semantics buffer using normalised texture coordinates.

Parameters:

uv

Normalised texture coordinates. The bottom-left is (0,1); the top-right is (1,0).

channelName

Name of the semantic class to look for.

Returns:

True if the channel exists at the given coordinates.

ISemanticBuffer FitToViewport(int viewportWidth, int viewportHeight)

Resizes the semantic buffer to fit the given viewport dimensions.

Parameters:

viewportWidth

Width of the viewport. In most cases this equals the rendering camera’s pixel width.

viewportHeight

Height of the viewport. In most cases this equals the rendering camera’s pixel height.

Returns:

A new buffer sized to the given viewport dimensions and rotated to match the screen orientation.
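Fitting the buffer to the rendering camera makes subsequent per-pixel lookups line up with what is on screen. A minimal sketch (the `camera` reference is assumed to be the rendering camera):

```csharp
// Produce a buffer whose dimensions and orientation match the rendering
// camera, so pixel and UV queries correspond to on-screen positions.
var fitted = buffer.FitToViewport(camera.pixelWidth, camera.pixelHeight);
```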

int GetChannelIndex(string channelName)

Get the channel index of a specified semantic class.

Parameters:

channelName

Name of semantic class.

Returns:

The index of the specified semantic class, or -1 if the channel does not exist.
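Because a -1 result signals a missing channel, callers should check the index before using it. A short sketch, with "building" as an illustrative class name:

```csharp
// Resolve a class name once and reuse the index in per-pixel queries.
var buildingIndex = buffer.GetChannelIndex("building");
if (buildingIndex >= 0)
{
  bool found = buffer.DoesChannelExistAt(10, 20, buildingIndex);
}
```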

UInt32 GetChannelTextureMask(int channelIndex)

Get a mask with only the specified channel’s bit enabled. Can be used to quickly check if a channel exists at a particular pixel in this semantic buffer.

Parameters:

channelIndex

Channel index of the semantic class to mask for.

Returns:

A mask with only the specified channel’s bit enabled.

UInt32 GetChannelTextureMask(int[] channelIndices)

Get a mask with only the specified channels’ bits enabled. Can be used to quickly check if a set of channels exists at a particular pixel in this semantic buffer.

Parameters:

channelIndices

Channel indices of the semantic classes to mask for.

Returns:

A mask with only the specified channels’ bits enabled.

UInt32 GetChannelTextureMask(string channelName)

Get a mask with only the specified channel’s bit enabled. Can be used to quickly check if a channel exists at a particular pixel in this semantic buffer.

Parameters:

channelName

Name of the semantic class to mask for.

Returns:

A mask with only the specified channel’s bit enabled.

UInt32 GetChannelTextureMask(string[] channelNames)

Get a mask with only the specified channels’ bits enabled. Can be used to quickly check if a set of channels exists at a particular pixel in this semantic buffer.

Parameters:

channelNames

Names of the semantic classes to mask for.

Returns:

A mask with only the specified channels’ bits enabled.
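Masks pair naturally with Sample: resolve the mask once, then test packed values with a bitwise AND instead of repeated name lookups. A sketch under stated assumptions (the channel names are illustrative):

```csharp
// Build a combined mask for two example classes, then test the packed
// semantics value at the centre of the buffer against it.
var mask = buffer.GetChannelTextureMask(new[] { "sky", "water" });
var packed = buffer.Sample(new Vector2(0.5f, 0.5f));
bool skyOrWater = (packed & mask) != 0u;
```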

ISemanticBuffer Interpolate(
    IARCamera arCamera,
    int viewportWidth,
    int viewportHeight,
    float backProjectionDistance = AwarenessParameters.DefaultBackProjectionDistance
)

Interpolate the semantic buffer using the given camera and viewport information. Since the semantic buffer served by an ARFrame was likely generated using a camera image from a previous frame, always interpolate the buffer in order to get the best semantic segmentation output.

Parameters:

arCamera

ARCamera with the pose to interpolate this buffer to.

viewportWidth

Width of the viewport. In most cases this equals the rendering camera’s pixel width. This is used to calculate the new projection matrix.

viewportHeight

Height of the viewport. In most cases this equals the rendering camera’s pixel height. This is used to calculate the new projection matrix.

backProjectionDistance

This value sets the normalized distance of the back-projection plane. Lower values result in outputs more accurate for closer pixels, but pixels further away will move faster than they should. Use 0.5f if your subject in the scene is always closer than ~2 meters from the device, and use 1.0f if your subject is further away most of the time.

Returns:

A new semantic buffer with data interpolated using the camera and viewport inputs.
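Following the recommendation above, interpolation would typically run once per frame before any sampling. A sketch under stated assumptions: `arCamera` is taken from the current ARFrame and `camera` is the rendering camera.

```csharp
// Interpolate the buffer to the current camera pose before sampling.
// The default back-projection distance is kept here; pass 0.5f or 1.0f
// explicitly to tune for near or far subjects as described above.
var interpolated = buffer.Interpolate(
  arCamera, camera.pixelWidth, camera.pixelHeight);
```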

ISemanticBuffer RotateToScreenOrientation()

Rotates the semantic buffer so it is oriented to the screen.

Returns:

A new semantic buffer, rotated to match the current screen orientation.

UInt32 Sample(Vector2 uv)

Returns the nearest value to the specified normalized coordinates in the buffer.

Parameters:

uv

Normalized coordinates.

Returns:

The value in the semantic buffer at the nearest location to the coordinates.

UInt32 Sample(Vector2 uv, Matrix4x4 transform)

Returns the nearest value to the specified normalized coordinates in the buffer.

Parameters:

uv

Normalized coordinates.

transform

2D transformation applied to normalized coordinates before sampling. This transformation should convert to the semantic buffer’s coordinate frame.

Returns:

The value in the semantic buffer at the nearest location to the transformed coordinates.
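The transformed overload is useful when screen-space UVs must first be mapped into the buffer's frame. In this sketch, `displayTransform` is assumed to have been computed elsewhere (it is not provided by this interface), and "sky" is an illustrative channel name.

```csharp
// Sample with a transform that maps screen UVs into the buffer's
// coordinate frame, then test the packed result against a channel mask.
UInt32 value = buffer.Sample(uv, displayTransform);
bool isSky = (value & buffer.GetChannelTextureMask("sky")) != 0u;
```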