<?xml version="1.0" encoding="UTF-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
    <title>nvidia</title>
    <link rel="self" type="application/atom+xml" href="https://links.biapy.com/guest/tags/146/feed"/>
    <updated>2026-04-18T23:02:04+00:00</updated>
    <id>https://links.biapy.com/guest/tags/146/feed</id>
            <entry>
            <id>https://links.biapy.com/links/12216</id>
            <title type="text"><![CDATA[NVIDIA OpenShell]]></title>
            <link rel="alternate" href="https://docs.nvidia.com/openshell/latest/" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/12216"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[NVIDIA OpenShell is the safe, private runtime for autonomous AI agents.

It provides sandboxed execution environments that protect your data, credentials, and infrastructure. Agents run with exactly the permissions they need and nothing more, governed by declarative policies that prevent unauthorized file access, data exfiltration, and uncontrolled network activity.

- [NVIDIA OpenShell @ GitHub](https://github.com/NVIDIA/OpenShell).]]>
            </summary>
            <updated>2026-03-20T13:48:09+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/12160</id>
            <title type="text"><![CDATA[NVIDIA NemoClaw]]></title>
            <link rel="alternate" href="https://www.nvidia.com/en-us/ai/nemoclaw/" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/12160"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[NVIDIA plugin for the secure installation of OpenClaw.
NemoClaw is an OpenClaw plugin for NVIDIA OpenShell.

- [NVIDIA NemoClaw @ GitHub](https://github.com/NVIDIA/NemoClaw).

Related contents:

- [Nvidia Software Aims to Bring OpenClaw to the Enterprise @ The Wall Street Journal](https://www.wsj.com/cio-journal/nvidia-software-aims-to-bring-openclaw-to-the-enterprise-7b8e9927).
- [Nvidia NemoClaw promises to run OpenClaw agents securely @ CIO](https://www.cio.com/article/4146545/nvidia-nemoclaw-promises-to-run-openclaw-agents-securely.html).]]>
            </summary>
            <updated>2026-03-18T13:07:43+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/10980</id>
            <title type="text"><![CDATA[GPU Hot]]></title>
            <link rel="alternate" href="https://psalias2006.github.io/gpu-hot/" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/10980"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[Real-time NVIDIA GPU Monitoring.

Real-time NVIDIA GPU monitoring dashboard. Web-based, no SSH required.

- [GPU Hot @ GitHub](https://github.com/psalias2006/gpu-hot).
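
A minimal launch sketch with Docker: the image name and port below are illustrative assumptions, not values confirmed by the project, so check the gpu-hot README before use.

```shell
# Hedged sketch: run the GPU Hot dashboard in a container with GPU access.
# Requires the NVIDIA Container Toolkit on the host for --gpus to work.
# The image name and port are assumptions; see the project README.
docker run -d --name gpu-hot \
  --gpus all \
  -p 1312:1312 \
  ghcr.io/psalias2006/gpu-hot:latest
# then open http://localhost:1312 in a browser; no SSH session needed
```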

Related contents:

- [Best Docker Apps of October 2025! @ ServersatHome's YouTube](https://www.youtube.com/watch?v=dMhUiJqohFU).]]>
            </summary>
            <updated>2025-11-16T16:33:02+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/10893</id>
            <title type="text"><![CDATA[Murmure]]></title>
            <link rel="alternate" href="https://www.murmure.app/" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/10893"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[Privacy-first and free Speech-to-Text.

Murmure is an AI-powered, offline speech-to-text tool designed with privacy first in mind and powered by NVIDIA Parakeet 🦜. Your voice always stays yours.

- [Murmure @ GitHub](https://github.com/Kieirra/murmure).]]>
            </summary>
            <updated>2026-04-03T17:20:43+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/48</id>
            <title type="text"><![CDATA[LACT]]></title>
            <link rel="alternate" href="https://github.com/ilya-zlobintsev/LACT" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/48"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[Linux GPU Configuration And Monitoring Tool.

This application lets you control and monitor your AMD, NVIDIA, or Intel GPU on a Linux system.

Related contents:

- [LACT - Le panneau de contrôle GPU qui manquait à Linux @ Korben :fr:](https://korben.info/lact-controle-gpu-amd-linux.html).]]>
            </summary>
            <updated>2026-02-02T09:00:09+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/80</id>
            <title type="text"><![CDATA[nvidia/parakeet-tdt-0.6b-v2 @ Hugging Face]]></title>
            <link rel="alternate" href="https://huggingface.co/nvidia/parakeet-tdt-0.6b-v2" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/80"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[parakeet-tdt-0.6b-v2 is a 600-million-parameter automatic speech recognition (ASR) model designed for high-quality English transcription, featuring support for punctuation, capitalization, and accurate timestamp prediction.
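
A minimal transcription sketch in the NeMo style shown on the model card; it assumes `nemo_toolkit[asr]` is installed and that `audio.wav` is a local English recording (the filename is a placeholder).

```python
# Hedged sketch of transcribing with parakeet-tdt-0.6b-v2 via NVIDIA NeMo.
# Assumes: pip install -U "nemo_toolkit[asr]"; "audio.wav" is a local file.
import nemo.collections.asr as nemo_asr

# downloads the 600M-parameter checkpoint from Hugging Face on first use
asr_model = nemo_asr.models.ASRModel.from_pretrained(
    model_name="nvidia/parakeet-tdt-0.6b-v2"
)

# timestamps=True also returns the model's word/segment timing predictions
output = asr_model.transcribe(["audio.wav"], timestamps=True)
print(output[0].text)
```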

Related contents:

- [Transcribe speech 100x faster and 100x cheaper with open models @ Modal](https://modal.com/blog/fast-cheap-batch-transcription).]]>
            </summary>
            <updated>2025-09-18T05:53:12+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/637</id>
            <title type="text"><![CDATA[KAI Scheduler]]></title>
            <link rel="alternate" href="https://github.com/NVIDIA/KAI-Scheduler" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/637"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[KAI Scheduler is an open-source, Kubernetes-native scheduler for AI workloads at large scale.]]>
            </summary>
            <updated>2025-08-28T17:44:06+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/1012</id>
            <title type="text"><![CDATA[NVIDIA PhysX]]></title>
            <link rel="alternate" href="https://nvidia-omniverse.github.io/PhysX/" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/1012"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[NVIDIA PhysX SDK.

This repository contains source releases of the PhysX, Flow, and Blast SDKs used in NVIDIA Omniverse.

- [NVIDIA PhysX @ GitHub](https://github.com/NVIDIA-Omniverse/PhysX).]]>
            </summary>
            <updated>2025-08-28T18:46:42+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/1240</id>
            <title type="text"><![CDATA[NVIDIA Dynamo]]></title>
            <link rel="alternate" href="https://developer.nvidia.com/dynamo" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/1240"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[A Datacenter Scale Distributed Inference Serving Framework.

NVIDIA Dynamo is a high-throughput, low-latency inference framework designed for serving generative AI and reasoning models in multi-node distributed environments. Dynamo is inference-engine agnostic (supporting TRT-LLM, vLLM, SGLang, and others) and captures LLM-specific capabilities.

- [Dynamo @ GitHub](https://github.com/ai-dynamo/dynamo).

Related contents:

- [A closer look at Dynamo, Nvidia's 'operating system' for AI inference @ The Register](https://www.theregister.com/2025/03/23/nvidia_dynamo/).]]>
            </summary>
            <updated>2025-08-28T19:23:04+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/1970</id>
            <title type="text"><![CDATA[GPU Glossary]]></title>
            <link rel="alternate" href="https://modal.com/gpu-glossary/readme" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/1970"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[We wrote this glossary to solve a problem we ran into working with GPUs here at Modal: the documentation is fragmented, making it difficult to connect concepts at different levels of the stack, like Streaming Multiprocessor Architecture, Compute Capability, and nvcc compiler flags.]]>
            </summary>
            <updated>2025-08-28T21:24:22+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/3240</id>
            <title type="text"><![CDATA[exo]]></title>
            <link rel="alternate" href="https://github.com/exo-explore/exo" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/3240"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[Run your own AI cluster at home with everyday devices 📱💻 🖥️⌚ 

Forget expensive NVIDIA GPUs, unify your existing devices into one powerful GPU: iPhone, iPad, Android, Mac, Linux, pretty much any device!

Related contents:

- [Exo - Pour créer un super cluster IA avec tous les appareils qui trainent chez vous @ Korben :fr:](https://korben.info/exo-cluster-ia-distribue-appareils-gpu.html).
- [I built an AI supercomputer with 5 Mac Studios @ NetworkChuck's YouTube](https://www.youtube.com/watch?v=Ju0ndy2kwlw).]]>
            </summary>
            <updated>2026-01-15T07:43:46+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/3478</id>
            <title type="text"><![CDATA[TensorRT SDK]]></title>
            <link rel="alternate" href="https://developer.nvidia.com/tensorrt" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/3478"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[NVIDIA® TensorRT™ is an ecosystem of APIs for high-performance deep learning inference. TensorRT includes an inference runtime and model optimizations that deliver low latency and high throughput for production applications. The TensorRT ecosystem includes TensorRT, TensorRT-LLM, TensorRT Model Optimizer, and TensorRT Cloud.

- [TensorRT Open Source Software @ GitHub](https://github.com/NVIDIA/TensorRT).]]>
            </summary>
            <updated>2025-08-29T01:37:16+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/5629</id>
            <title type="text"><![CDATA[nvitop]]></title>
            <link rel="alternate" href="https://github.com/XuehaiPan/nvitop" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/5629"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[An interactive NVIDIA-GPU process viewer and beyond, the one-stop solution for GPU process management.]]>
            </summary>
            <updated>2025-08-29T07:36:12+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/5641</id>
            <title type="text"><![CDATA[Headless Steam Service]]></title>
            <link rel="alternate" href="https://github.com/Steam-Headless/docker-steam-headless" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/5641"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[A headless Steam Docker image with NVIDIA GPU support, accessible via a Web UI.
Play your games in the browser with audio. Connect another device and use it with Steam Remote Play. Easily deploy a Steam Docker instance in seconds.]]>
            </summary>
            <updated>2025-08-29T07:37:11+00:00</updated>
        </entry>
            <entry>
            <id>https://links.biapy.com/links/8102</id>
            <title type="text"><![CDATA[vramfs]]></title>
            <link rel="alternate" href="https://github.com/Overv/vramfs" />
            <link rel="via" type="application/atom+xml" href="https://links.biapy.com/links/8102"/>
            <author>
                <name><![CDATA[Biapy]]></name>
            </author>
            <summary type="text">
                <![CDATA[Unused RAM is wasted RAM, so why not put some of your graphics card's VRAM to work?

vramfs is a utility that uses the FUSE library to create a file system in VRAM. The idea is much the same as a ramdisk, except that it uses the video RAM of a discrete graphics card to store files. It is not intended for serious use, but it actually works fairly well, especially now that consumer GPUs with 4 GB or more of VRAM are common.
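
A minimal build-and-mount sketch based on the usage described in the project README; the mount point and size here are arbitrary examples.

```shell
# Hedged sketch: build vramfs and mount a VRAM-backed filesystem.
# Requires FUSE and OpenCL development headers; paths and sizes are examples.
git clone https://github.com/Overv/vramfs.git
cd vramfs
make

mkdir -p /tmp/vram
bin/vramfs /tmp/vram 4G      # back a 4 GiB filesystem with GPU VRAM

# use /tmp/vram like any other directory, then unmount:
fusermount -u /tmp/vram
```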

On the developer's system, continuous read performance is ~2.4 GB/s and write performance ~2.0 GB/s, about a third of what is achievable with a ramdisk. That is already decent for a device not designed for large data transfers to the host, but future development should aim to get closer to the PCI-e bandwidth limits. See the benchmarks section for more info.]]>
            </summary>
            <updated>2025-08-29T14:28:00+00:00</updated>
        </entry>
    </feed>
