Prevent PyTorch's `CUDA error: out of memory` in just 1 line of code.
Updated May 31, 2025 · Python
SSD-based object and text detection with Keras, SSD, DSOD, TextBoxes, SegLink, TextBoxes++, CRNN
Distributed training (multi-node) of a Transformer model
🎯 Gradient Accumulation for TensorFlow 2
TorchHandle makes your PyTorch development more efficient and your use of PyTorch more comfortable
Gradient accumulation on tf.estimator
A simple implementation of Multi-passage BERT
TensorFlow 2 / Keras gradient accumulation
This project aims to help people quickly implement TensorFlow model pipelines for different NLP tasks.
🎯 Production-ready implementation of video prediction models using PyTorch. Features Enhanced ConvLSTM with temporal attention, PredRNN with spatiotemporal memory, and Transformer-based architecture.
Gradient Accumulation with TensorFlow 2.x
AlzMRI-Net: Classify Alzheimer's stages from MRI scans.
Comprehensive PyTorch Lightning framework featuring 20+ educational notebooks, advanced ML patterns, and production-ready workflows. Covers vision, NLP, tabular, and time series domains with distributed training, mixed precision, custom loops, and deployment pipelines. Complete with synthetic data generators and testing.
Implementation of Gradient Accumulation for low-memory language-modelling transformer fine-tuning.
Classifying images of flowers into 17 categories using EfficientNet-B0 and PyTorch.
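The repositories above all revolve around gradient accumulation: summing gradients over several small micro-batches before taking a single optimizer step, which emulates a larger effective batch size on limited GPU memory. As a minimal sketch (the model, data, and `accumulation_steps` value here are hypothetical, not taken from any listed project), the core loop in PyTorch looks like:

```python
import torch
import torch.nn as nn

# Toy model and optimizer for illustration only.
model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

accumulation_steps = 4  # effective batch = micro-batch size * 4
updates = 0             # counts actual optimizer steps taken

optimizer.zero_grad()
for step in range(8):  # 8 micro-batches -> 2 optimizer steps
    x = torch.randn(16, 10)  # stand-in micro-batch
    y = torch.randn(16, 1)
    loss = loss_fn(model(x), y)
    # Scale the loss so the accumulated gradient equals the mean over
    # the larger effective batch, not the sum of micro-batch means.
    (loss / accumulation_steps).backward()
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()       # apply the accumulated gradient
        optimizer.zero_grad()  # reset for the next accumulation window
        updates += 1
```

Because `backward()` adds into `.grad` by default, no extra buffers are needed; the only changes versus a plain training loop are the loss scaling and the conditional `step()`/`zero_grad()`.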