
Deep Learning Toolkit for LabVIEW

Version: 9.0.2.281
Released: Nov 06, 2025
Publisher: Ngene
License: Ngene Custom
LabVIEW Version: >= 20.0
Operating System: Windows
Used By: ngene_deepltk_patchcore_anomaly_detection_addon
Project Links: Homepage | Documentation | Repository | Discussion

Description

DeepLTK (Deep Learning Toolkit) empowers engineers and researchers to integrate deep learning directly into LabVIEW workflows. It provides a comprehensive, high-level API for building, training, analyzing, and deploying deep neural networks (DNNs) without leaving the LabVIEW environment.
Developed entirely in native LabVIEW code, DeepLTK has no external dependencies, ensuring straightforward installation, simplified development, and seamless deployment across Windows and NI Real-Time systems.

Key Features

End-to-End Workflow: Create, configure, train, and deploy deep neural networks entirely within LabVIEW.
Cross-Platform Acceleration: Run training and inference on both CPU and GPU.
Real-Time and Embedded Deployment: Deploy pre-trained networks to NI Real-Time targets and accelerate inference on FPGAs using the DeepLTK FPGA Add-on.
Network Visualization and Analysis: Visualize network topology, inspect layer parameters, and monitor performance metrics such as memory footprint and computational complexity.
Graph Optimization: Utilize graph-level optimization tools to improve inference speed and efficiency.
Custom Data and Training Control: Extend datasets with variant elements, use custom augmentations, and fine-tune solver parameters.
Diagnostics and Debugging: Access detailed layer-level error messages, validation tools, and network statistics for performance tuning.
Ready-to-Run Examples: Includes a variety of complete, real-world examples for classification, detection, and real-time deployment.

Supported Layers

Core Layers: Input (1D, 3D), Fully Connected (Linear, MLP), Convolutional (1D, 2D), Upsampling, ShortCut (Residual), Concatenation, Batch Normalization, Layer Normalization, Scaling, Activation, Pooling (Max, Avg, Global), Dropout, and SoftMax.
Activation Functions: Linear, Sigmoid, Tanh, ReLU, Leaky ReLU, ReLU6, Mish, Swish, and GELU (see the sketch after this list).
Augmentations: Noise, Flip (Vertical, Horizontal), Brightness, Contrast, Hue, Saturation, Shear, Scale (Zoom), Blur, and Move.
Object Detection: YOLO_v2 and YOLO_v4 layers for real-time object detection applications.
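
For reference, the sketch below shows what several of the listed activation functions compute, expressed in NumPy. This is a conceptual illustration only, not DeepLTK code: the toolkit implements these natively in LabVIEW, and the function and parameter names here are not part of its API.

```python
# Conceptual NumPy reference for the activation functions listed above.
# Not DeepLTK code; names and defaults here are illustrative assumptions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):  # alpha = 0.01 is a common default, not DeepLTK's
    return np.where(x > 0, x, alpha * x)

def relu6(x):
    return np.minimum(np.maximum(0.0, x), 6.0)

def swish(x):
    return x * sigmoid(x)

def mish(x):
    # softplus(x) = log(1 + e^x); log1p improves numerical stability
    return x * np.tanh(np.log1p(np.exp(x)))

def gelu(x):
    # tanh approximation of GELU (Hendrycks & Gimpel, 2016)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))
```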

Optimizers (Solvers)

SGD: Classic stochastic gradient descent with Momentum and Weight Decay.
Adam: Adaptive moment estimation optimizer with first- and second-order moment adaptation.
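
The update rules behind these two solvers are standard; a minimal NumPy sketch follows, assuming conventional hyperparameter names (lr, momentum, weight_decay, beta1, beta2, eps) rather than DeepLTK's actual solver parameters.

```python
# Minimal sketch of the SGD-with-momentum and Adam update rules.
# Not DeepLTK code; hyperparameter names are conventional assumptions.
import numpy as np

def sgd_step(w, grad, velocity, lr=0.01, momentum=0.9, weight_decay=1e-4):
    """Classic SGD with momentum and L2 weight decay."""
    grad = grad + weight_decay * w                 # weight decay as an L2 penalty
    velocity[:] = momentum * velocity - lr * grad  # accumulate momentum
    w += velocity
    return w

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """Adam: per-parameter step sizes from first- and second-moment estimates."""
    m[:] = beta1 * m + (1.0 - beta1) * grad        # first moment (mean of grads)
    v[:] = beta2 * v + (1.0 - beta2) * grad**2     # second moment (uncentered var)
    m_hat = m / (1.0 - beta1**t)                   # bias correction for step t >= 1
    v_hat = v / (1.0 - beta2**t)
    w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w
```

The bias-correction terms matter early in training, when the moment estimates are still close to their zero initialization.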

Loss Functions

Mean Squared Error (MSE), Cross Entropy (LogLoss), and YOLO-based Object Detection losses (v2 and v4).
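
The two elementwise losses reduce to short formulas; the YOLO v2/v4 detection losses are composite (box, objectness, and class terms) and are not sketched here. The NumPy below is illustrative, not DeepLTK's implementation.

```python
# Reference formulas for MSE and Cross Entropy (LogLoss). Not DeepLTK code.
import numpy as np

def mse(y_pred, y_true):
    """Mean Squared Error averaged over all elements."""
    return np.mean((y_pred - y_true) ** 2)

def cross_entropy(probs, one_hot_labels, eps=1e-12):
    """Cross entropy for softmax outputs; eps guards against log(0)."""
    return -np.mean(np.sum(one_hot_labels * np.log(probs + eps), axis=-1))
```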

Example Applications

DeepLTK ships with extensive example projects demonstrating practical use cases:

MNIST_Classifier_MLP.vi – Image classification using a multilayer perceptron.
MNIST_Classifier_CNN(Train).vi – Training a convolutional neural network for digit recognition.
MNIST_Classifier(Deploy).vi – Deploying pretrained networks for inference.
MNIST_RT_Deployment – Running inference on NI Real-Time targets.
YOLO_Object_Detection(Cam).vi – Real-time object detection using pretrained YOLO architecture.
Object_Detection_Project – Custom training workflow for object detection on user datasets.

A larger list of examples is available at:
https://github.com/ngenehub/deepltk_examples

Release Notes

9.0.2.281 (Nov 06, 2025)

v9.0.2
This is a major update introducing new functionality and significant performance improvements (up to 2x).
Note: This version breaks backward compatibility with previous releases of the toolkit.

Features
1. Added support for two new layers: Layer Normalization and Scale (see the sketch after this list).
2. Added support for a new activation function: GELU.
3. Introduced Dry Run functionality, allowing calculation of model metrics without allocating resources.
4. Exposed Set Number of Threads API to control multithreading behavior in CPU mode.
5. Added new API NN_Get_T_dT.vim to replace NN_Get_T_dT.vi, which will be deprecated in future versions.
6. Enhanced error messages to help localize issues within the network by including the layer name, type, and index.
7. Layers are now exposed as references instead of DVRs.
8. Added a variant element in dataset types to support custom data storage.
9. Added option to specify a custom workspace size when creating a network from a configuration file.
10. Removed deprecated APIs:
NN_get_Detections(YOLO_v2)(batch).vi
NN_get_Detections(YOLO_v2).vi
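
For readers unfamiliar with the new layers in item 1, the sketch below shows the standard Layer Normalization computation with a learnable scale and shift. It is a conceptual illustration assuming the conventional gamma/beta/eps names, not DeepLTK's parameters.

```python
# Standard layer normalization over the feature axis. Not DeepLTK code.
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    """Normalize each sample over its last axis, then scale and shift."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return gamma * (x - mean) / np.sqrt(var + eps) + beta
```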

Optimizations
1. Greatly improved inference time for networks with a large number of layers (up to 2x faster for small models).
2. Optimized training speed and efficiency in GPU mode.
3. Improved performance of the ShortCut layer.
4. Changed NN_Layer_Ref type from DVR to Refnum to reduce overhead and improve latency. This change introduces minor backward compatibility issues.
5. Receptive Field and Stride information has been removed from the NN cluster. This information is now accessible from individual layers.

Bug Fixes
1. Fixed a bug where an error was generated for a missing loss type in inference mode.
2. Resolved an issue with the SWISH activation when training in GPU mode.
3. Corrected Convolution FLOPs calculation for cases where Groups > 1.
4. Fixed an NN_Eval issue where average Recall and F1 scores returned NaN due to missing classes in the test dataset.
5. Fixed a Forward bug in the Activation Layer when Activation = None (Identity) on GPU.
6. Fixed potential training issues in SoftMax and UpSample layers when located on a branch in the network.
7. Fixed memory leakage issues during training.
8. Added validation in the YOLO layer to ensure the number of classes is greater than zero.
9. Fixed error and warning propagation logic during forward and backward passes.
10. Corrected Layer Metric calculations for Batch Normalization and Convolution layers.
11. Improved clarity and completeness of various error messages.
12. Fixed algorithm selection and workspace related issues in Convolutional layers.
13. Added missing configuration parsing for the “noobject_conf_thresh” parameter.
14. Corrected mistakes in the documentation.
15. Removed unused VIs from the installer.
16. UI changes in examples.
17. Moved Object Detection controls to Object Detection API palette.
18. (v9.0.2) Fixed a bug in the Average Pool layer when using global mode with non-square input resolutions.

nahapetn and ngene contributed to this release.

