
CuLab - GPU Accelerated Toolkit

Version: 4.1.1.77
Released: Apr 09, 2025
Publisher: Ngene
License: Ngene Custom
LabVIEW Version: LabVIEW x64 >= 20.0
Operating System: Windows x64
Project links: Homepage · Documentation · Repository · Discussion

Description

CuLab is a GPU acceleration toolkit for LabVIEW, designed to simplify complex computations on Nvidia GPUs. It provides a broad API to accelerate a wide range of functions, including mathematical operations, linear algebra, signal generation, signal processing (FFT/IFFT, correlation, convolution, resampling), and array manipulation directly on the GPU. CuLab supports tensors (arrays) across all numeric types and dimensions (0D to 4D), making it highly adaptable to various data processing tasks. With its user-friendly design, CuLab enables LabVIEW users to seamlessly accelerate their applications on Nvidia GPUs.
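
CuLab itself is called from LabVIEW block diagrams, so there is no textual call syntax to quote here. Purely as a conceptual illustration of the same push/compute/pull offload pattern, the short sketch below uses CuPy, an unrelated Python GPU library; none of the names in it are CuLab APIs.

    # Illustrative only: push host data to the GPU, compute there, pull the result back.
    import numpy as np
    import cupy as cp

    signal = np.random.rand(1 << 16).astype(np.float32)   # host (CPU) data

    gpu_signal = cp.asarray(signal)      # push: host array -> GPU tensor
    spectrum = cp.fft.fft(gpu_signal)    # FFT computed entirely on the Nvidia GPU
    result = cp.asnumpy(spectrum)        # pull: GPU tensor -> host array

Broadly, the same flow in CuLab is to push a tensor to the GPU, wire GPU-side VIs together, and pull results back to the host (e.g., via CU_Tensor_Pull.vi) only when they are actually needed.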

Release Notes

4.1.1.77 (Apr 09, 2025)

V4.1.1
General Description
This update introduces new functionalities, performance optimizations, and bug fixes while maintaining full backward compatibility.

New Features
1. Numeric Subpalette:
• CU_Reduce.vi: Enables batch mode reduction.
• Supported Data Types: I32, U32, I64, U64, SGL, DBL, CDB, CSG.
• Dimensionality: 2D.
• Operations: Sum, Product, Min, Max.
• Preprocessing: None, Abs, Sqr, Sqrt.
2. Array Subpalette:
• CU_Array_Permute.vi: Permutes tensor dimensions.
• Supported Data Types: All.
• Dimensionalities: 3D, 4D.
3. Statistics Subpalette:
• CU_Mean_Batch.vi & CU_RMS_Batch.vi: Perform batch-mode mean and RMS calculations (available since v4.0.1).
• Supported Data Types: SGL, DBL, CSG, CDB.
• Dimensionality: 2D.
4. CU_Array_Reshape_to_TxD: Now supports an in-place option and allows one target dimension to be inferred automatically (a conceptual sketch of these new operations follows this list).
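
The sketch below is not CuLab code (CuLab VIs are wired in LabVIEW block diagrams); it only illustrates, via CuPy, what the new operations compute: a per-row reduction with a preprocessing step, a dimension permutation, batch mean/RMS, and a reshape with one dimension inferred. The array shapes are arbitrary examples.

    # Conceptual equivalents of the new VIs, written in CuPy for illustration only.
    import cupy as cp

    batch = cp.random.rand(8, 1024).astype(cp.float32)   # 2D tensor: 8 signals of 1024 samples

    # CU_Reduce.vi-style batch reduction: preprocessing (here Sqr), then Sum per row
    row_sum_sq = cp.sum(cp.square(batch), axis=1)         # result shape: (8,)

    # CU_Mean_Batch.vi / CU_RMS_Batch.vi analogues
    row_mean = cp.mean(batch, axis=1)
    row_rms = cp.sqrt(cp.mean(cp.square(batch), axis=1))

    # CU_Array_Permute.vi analogue: reorder the dimensions of a 3D tensor
    vol = cp.random.rand(2, 3, 4)
    permuted = cp.transpose(vol, (2, 0, 1))               # new shape: (4, 2, 3)

    # CU_Array_Reshape_to_TxD-style reshape with one target dimension inferred
    flat = cp.arange(24, dtype=cp.float32)
    reshaped = flat.reshape(6, -1)                        # -1 is inferred as 4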

Optimizations
1. CU_Transpose_2D_Array.vi: Significantly improved execution speed.
2. General Performance: Optimized multiple functions by using asynchronous copy operations (illustrated in the sketch below).
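
CuLab's internal implementation is not published in these notes, so the following CuPy stream sketch is only a rough illustration of the asynchronous-copy idea: queue the host-to-device transfer and the subsequent kernel on a stream so the CPU does not block until the result is required.

    # Rough illustration of asynchronous copies (not CuLab's actual implementation).
    import numpy as np
    import cupy as cp

    host = np.random.rand(1 << 20).astype(np.float32)   # pinned host memory would make the copy fully asynchronous

    stream = cp.cuda.Stream(non_blocking=True)
    with stream:
        dev = cp.asarray(host)   # host-to-device copy queued on the stream
        out = cp.sqrt(dev)       # kernel queued behind the copy on the same stream
    stream.synchronize()         # CPU blocks only here, when the result is needed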

Bug Fixes & Enhancements
1. Fixed a bug in CU_Initialize_Array.vi when initializing large tensors with complex data types.
2. CU_Convolution.vi now returns an error when called in in-place mode.
3. Improved error messages to display the full call chain.
4. Resolved an issue where some VIs were broken after installation.
5. Enhanced dimension validation with clearer error messages for mismatched dimensions.
6. Removed unnecessary DLLs from the package installer.
7. Improved error handling in GPU Info and CU_FIR_Filter functions.
8. Removed unnecessary host-destination wire connections in example VIs that call CU_Tensor_Pull.vi.
9. Various minor fixes and stability improvements.

Cosmetic Changes
1. Updated icons for the following API VIs (now indicating default in-place operations):
• CU_Replace_Array_Element.vi
• CU_Replace_Array_Element_by_Index_Batch.vi
• CU_Replace_Array_Subset_1D_1D.vi
• CU_Replace_Array_Subset_2D_2D.vi
• CU_Replace_Array_Subset_2D_1D.vi



Recent Posts

Can waveform generation be included as simple trig and linear operations like ramp and sine pattern?
Many RF DSP maths require simple signals to perform operations. Making those signals takes horsepow…

by norm!, 2 years, 7 months ago · suggestion
Can the complex number library be fleshed out with polar transforms?
Complex to polar transforms are done a TON in RF DSP. I'd love to see the impact on some core algor…

by norm!, 2 years, 7 months ago · suggestion