<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ubuntu Archives - Liao&#039;s blog</title>
	<atom:link href="https://www.laobaiblog.top/tag/ubuntu/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.laobaiblog.top/tag/ubuntu/</link>
	<description>The road ahead is long and far; I will search high and low.</description>
	<lastBuildDate>Tue, 01 Apr 2025 13:40:41 +0000</lastBuildDate>
	<language>zh-Hans</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>

<image>
	<url>https://www.laobaiblog.top/wp-content/uploads/2022/01/cropped-tyuu-32x32.png</url>
	<title>ubuntu Archives - Liao&#039;s blog</title>
	<link>https://www.laobaiblog.top/tag/ubuntu/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Installing Ollama on Ubuntu and Manually Importing GGUF Models Downloaded from Huggingface</title>
		<link>https://www.laobaiblog.top/2025/04/01/ollama-gguf/</link>
		
		<dc:creator><![CDATA[大白]]></dc:creator>
		<pubDate>Tue, 01 Apr 2025 07:57:35 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[分享]]></category>
		<category><![CDATA[huggingface]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[ollama]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">https://www.laobaiblog.top/?p=546</guid>

					<description><![CDATA[<p>Deploying Ollama + DeepSeek on Ubuntu 24.04 lets you build a secure, controllable loc &#8230;</p>
<p><a href="https://www.laobaiblog.top/2025/04/01/ollama-gguf/">Installing Ollama on Ubuntu and Manually Importing GGUF Models Downloaded from Huggingface</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<blockquote><p>
  Deploying Ollama + DeepSeek on Ubuntu 24.04 lets you build a secure, controllable local AI knowledge-base system, suitable for scenarios such as enterprise document management or a personal study assistant. This tutorial walks through installation and configuration so you can quickly set up your own AI knowledge base for efficient retrieval and interactive Q&amp;A.
</p></blockquote>
<h3>I. Downloading and Deploying Ollama</h3>
<p>Ollama is an open-source project. You can install it with the script recommended on the official site, or download a release package from GitHub and install it manually. Here I install <code>ollama-linux-amd64.tgz</code> manually.</p>
<h4>1. Automatic installation</h4>
<pre><code class="language-shell line-numbers">curl -fsSL https://ollama.com/install.sh | sh
</code></pre>
<h4>2. Manual installation</h4>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_f15c1e22299dd452b3c75bd3294b5074.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_f15c1e22299dd452b3c75bd3294b5074.jpg" alt="" /></a></p>
<pre><code class="language-shell line-numbers"># Check GitHub for the latest release
wget https://github.com/ollama/ollama/releases/download/v0.6.3/ollama-linux-amd64.tgz
# Extract and copy it into the system directory:
sudo tar -C /usr -zxvf ollama-linux-amd64.tgz
# That completes the deployment
ollama -v
# Prints "ollama version is 0.6.3", confirming the installed version.
</code></pre>
<h4>3. Create the Ollama user and system service</h4>
<p>For security, isolation, and easier system management, create a dedicated ollama user:</p>
<pre><code class="language-shell line-numbers"># Add the user
sudo useradd -r -s /bin/false -U -m -d /usr/share/ollama ollama
# Add the current user to the ollama group
sudo usermod -a -G ollama $(whoami)
</code></pre>
<p>Create the systemd service file:</p>
<pre><code class="language-shell line-numbers"># Edit the file
sudo vi /etc/systemd/system/ollama.service
# File contents
[Unit]
Description=Ollama Service
After=network-online.target

[Service]
ExecStart=/usr/bin/ollama serve
User=ollama
Group=ollama
Restart=always
RestartSec=3
Environment="PATH=$PATH"
Environment="OLLAMA_HOST=0.0.0.0:11434"
Environment="OLLAMA_ORIGINS=*"

[Install]
WantedBy=default.target
</code></pre>
<h4>4. Reload the configuration and enable start on boot</h4>
<pre><code class="language-shell line-numbers"># Reload the configuration
sudo systemctl daemon-reload
# Start the service
sudo systemctl start ollama.service
# Check the service status
sudo systemctl status ollama.service
# Enable the service at boot
sudo systemctl enable ollama.service
</code></pre>
<h3>II. Downloading Models from Huggingface</h3>
<p><a class="wp-editor-md-post-content-link" href="https://huggingface.co/">Download from the Huggingface website (may require a VPN)</a></p>
<ol>
<li>Because the network to Ollama's registry can be unstable, we skip <code>ollama pull XXX</code> here. Pick a model that suits your GPU; these two serve as references:</li>
</ol>
<ul>
<li><a class="wp-editor-md-post-content-link" href="https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF/tree/main">unsloth/DeepSeek-R1-Distill-Qwen-32B-GGUF</a></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_cf5a1b7c1d1cd29251b246290255c3cf.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_cf5a1b7c1d1cd29251b246290255c3cf.jpg" alt="" /></a></p>
<ul>
<li><a class="wp-editor-md-post-content-link" href="https://huggingface.co/Qwen/QwQ-32B-GGUF">Qwen/QwQ-32B-GGUF</a></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_8add52e0c967508a9cc7032be3132d8c.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_8add52e0c967508a9cc7032be3132d8c.jpg" alt="" /></a></p>
<p>Download the files and upload them to the model directory <code>/data/models/XXX</code> on the server</p>
<pre><code class="language-shell line-numbers"># Breaking down the filename
DeepSeek: a model released by DeepSeek
R1: the deep-reasoning model line
Distill: a distilled model
Qwen: related to Alibaba's Tongyi Qianwen (Qwen) model family
32B: 32 Billion, i.e. the 32-billion-parameter version
Q4: 4-bit quantization
K: grouped quantization, an optimization technique within the quantization algorithm
M: medium quantization granularity
gguf: GPT-Generated Unified Format, a binary storage format designed for large models
</code></pre>
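<p>The naming scheme above can be scripted. A minimal sketch (the helper name <code>parse_gguf</code> is mine, not part of any tool) that pulls the size and quantization fields out of a GGUF filename:</p>

```shell
# Hypothetical helper: split a GGUF filename into its trailing naming components.
parse_gguf() {
  local name="${1%.gguf}"    # strip the .gguf extension
  local quant="${name##*-}"  # last dash-separated field, e.g. Q4_K_M
  local rest="${name%-*}"
  local size="${rest##*-}"   # second-to-last field, e.g. 32B
  echo "quant=$quant size=$size"
}

parse_gguf "DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf"  # quant=Q4_K_M size=32B
```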
<h3>III. Manually Loading and Running Models in Ollama</h3>
<p>Place the two model files <strong><code>DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf</code></strong> and <strong><code>qwq-32b-q4_k_m.gguf</code></strong> into their respective <strong><code>deepseek</code></strong> and <strong><code>qwq</code></strong> directories.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_84e92d9b67262d046758b6409a1d931c.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_84e92d9b67262d046758b6409a1d931c.jpg" alt="" /></a></p>
<h5>1. In the same directory as the DeepSeek model file, create a file named <code>ollama-deepseek</code> with the following content:</h5>
<pre><code class="language-shell line-numbers">FROM ./DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf
</code></pre>
<pre><code class="language-shell line-numbers"># Then, from the model file directory, import the model:
ollama create DeepSeek-R1-Distill-Qwen-32B-GGUF -f ./ollama-deepseek
</code></pre>
<h5>2. In the same directory as the QwQ model file, create a file named <code>ollama-qwq</code> with the following content:</h5>
<pre><code class="language-shell line-numbers">FROM ./qwq-32b-q4_k_m.gguf
</code></pre>
<pre><code class="language-shell line-numbers"># Then, from the model file directory, import the model:
ollama create QwQ-32B-GGUF -f ./ollama-qwq
</code></pre>
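<p>The two steps above can be scripted in one go. A sketch that writes both Modelfiles next to their GGUF files; the <code>/tmp/models</code> paths are placeholders for your real model directories, and the <code>ollama create</code> lines are left as comments since they need the actual model files:</p>

```shell
# Write a one-line Modelfile into each model directory (placeholder paths).
mkdir -p /tmp/models/deepseek /tmp/models/qwq
printf 'FROM ./DeepSeek-R1-Distill-Qwen-32B-Q4_K_M.gguf\n' > /tmp/models/deepseek/ollama-deepseek
printf 'FROM ./qwq-32b-q4_k_m.gguf\n' > /tmp/models/qwq/ollama-qwq

# Then, inside each directory:
#   ollama create DeepSeek-R1-Distill-Qwen-32B-GGUF -f ./ollama-deepseek
#   ollama create QwQ-32B-GGUF -f ./ollama-qwq
cat /tmp/models/deepseek/ollama-deepseek
```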
<h3>IV. Listing and Running Models</h3>
<p>Use <code>ollama list</code> to see the loaded models:</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_53dcc1feda2e1a8c640041ba809b3a1b.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_53dcc1feda2e1a8c640041ba809b3a1b.jpg" alt="" /></a></p>
<p>Then run any listed model with <code>ollama run &lt;model-name&gt;</code>.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_83e2136af7bdf2b143a48fc317a9d51a.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_83e2136af7bdf2b143a48fc317a9d51a.jpg" alt="" /></a></p>
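<p>Besides the interactive CLI, an imported model can also be queried over Ollama's HTTP API on port 11434 (the <code>OLLAMA_HOST</code> set in the service file above). A sketch that builds a request body for the <code>/api/generate</code> endpoint; the curl line is commented out because it needs a running server:</p>

```shell
# Build a request body for Ollama's /api/generate endpoint.
cat > /tmp/ollama-req.json <<'EOF'
{"model": "QwQ-32B-GGUF", "prompt": "Why is the sky blue?", "stream": false}
EOF

# With the service running, send it like this:
# curl http://localhost:11434/api/generate -d @/tmp/ollama-req.json
cat /tmp/ollama-req.json
```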
<p><a href="https://www.laobaiblog.top/2025/04/01/ollama-gguf/">Installing Ollama on Ubuntu and Manually Importing GGUF Models Downloaded from Huggingface</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Installing the Milvus Vector Database with Docker Compose and Setting Up the Attu GUI</title>
		<link>https://www.laobaiblog.top/2025/04/01/docker-compose%e5%ae%89%e8%a3%85%e9%85%8d%e7%bd%ae%e5%90%91%e9%87%8f%e6%95%b0%e6%8d%ae%e5%ba%93milvus%ef%bc%8c%e9%85%8d%e7%bd%ae%e5%8f%af%e8%a7%86%e5%8c%96attu/</link>
		
		<dc:creator><![CDATA[大白]]></dc:creator>
		<pubDate>Tue, 01 Apr 2025 06:49:19 +0000</pubDate>
				<category><![CDATA[Docker]]></category>
		<category><![CDATA[Linux]]></category>
		<category><![CDATA[分享]]></category>
		<category><![CDATA[docker]]></category>
		<category><![CDATA[docker compose]]></category>
		<category><![CDATA[milvus]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[向量数据库]]></category>
		<category><![CDATA[开源]]></category>
		<guid isPermaLink="false">https://www.laobaiblog.top/?p=544</guid>

					<description><![CDATA[<p>Covers installing the Milvus vector database: creating a working directory, downloading the docker-compose.yml file &#8230;</p>
<p><a href="https://www.laobaiblog.top/2025/04/01/docker-compose%e5%ae%89%e8%a3%85%e9%85%8d%e7%bd%ae%e5%90%91%e9%87%8f%e6%95%b0%e6%8d%ae%e5%ba%93milvus%ef%bc%8c%e9%85%8d%e7%bd%ae%e5%8f%af%e8%a7%86%e5%8c%96attu/">Installing the Milvus Vector Database with Docker Compose and Setting Up the Attu GUI</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<blockquote><p>
  Covers installing the Milvus vector database: creating a working directory, downloading the docker-compose.yml file, adding the attu web panel, and tightening the security settings.
</p></blockquote>
<h3>Environment</h3>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_5f5908886322910b08bd5deb0f4862ee.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_5f5908886322910b08bd5deb0f4862ee.jpg" alt="" /></a></p>
<ul>
<li>docker version: 28.0.4</li>
<li>docker compose version: v2.34.0</li>
</ul>
<h3>A Brief Introduction to Milvus</h3>
<p><strong>Milvus is an open-source vector database built for AI applications, used to manage and search massive collections of feature vectors</strong>. Its main strengths include:</p>
<ul>
<li>Fast vector search: Milvus implements several advanced index types, such as IVF, HNSW, and ANNOY, enabling efficient approximate nearest-neighbor search over large datasets.</li>
<li>Easy to scale and operate: it supports horizontal and vertical scaling to keep up with growing data volumes and query loads, and its distributed architecture lets storage and compute scale flexibly.</li>
<li>Multiple persistence options: it works with SSDs, HDDs, and other storage media, and integrates with persistent stores such as MinIO and S3.</li>
<li>Rich client interfaces: SDKs are available for Python, Java, RESTful APIs, and more, making it convenient across application scenarios.</li>
<li>Strong extensibility and compatibility: it handles vectors of all sizes and types and integrates seamlessly with existing data-processing and machine-learning workflows.</li>
<li>Container and cloud-native support: Docker and Kubernetes support makes deployment and management in cloud environments straightforward.</li>
<li>Open-source community: as an open-source project, Milvus has an active community that keeps adding features and improvements.</li>
</ul>
<p>Milvus suits any application that needs efficient vector search, such as recommender systems, image retrieval, and natural language processing. Thanks to its efficiency, usability, and scalability, it is increasingly popular in AI application development.</p>
<h3>Installing Milvus</h3>
<p><strong>1. Create a working directory (any name)</strong></p>
<pre><code class="language-shell line-numbers"># Switch to the root directory
cd /root
# Create a directory named milvus for the data; the name is up to you
mkdir milvus
# Enter the new directory
cd milvus
</code></pre>
<p><strong>2. Download and edit docker-compose.yml</strong></p>
<p><a class="wp-editor-md-post-content-link" href="https://github.com/milvus-io/milvus/releases/download/v2.5.7/milvus-standalone-docker-compose-gpu.yml">Check GitHub for the latest version</a> and download the GPU-enabled yml file (<strong>for nvidia GPUs</strong>). You can also follow the <a class="wp-editor-md-post-content-link" href="https://milvus.io/docs/zh/install_standalone-docker-compose.md">official guide</a>.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_6f59b57bfdf0a0011b6760617fe0f3a4.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_6f59b57bfdf0a0011b6760617fe0f3a4.jpg" alt="" /></a></p>
<p><span id="more-544"></span><br />
<a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_0eeb45cac25f24508d7879025e257bfb.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_0eeb45cac25f24508d7879025e257bfb.jpg" alt="" /></a></p>
<p><strong>3. Download the milvus.yaml file</strong></p>
<p>This is Milvus's configuration file. It is built into the container, but to change any settings you need a separate copy; here we modify it to enable access control.</p>
<pre><code class="language-shell line-numbers"># Change the version number to match your milvus version
wget https://raw.githubusercontent.com/milvus-io/milvus/v2.5.7/configs/milvus.yaml
</code></pre>
<p>Once downloaded, make sure the file sits in the milvus working directory, then edit it and <strong>set <code>common &gt; security &gt; authorizationEnabled</code> to <code>true</code></strong>.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_f74e87dac045cf2c7493aaa799ff253e.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_f74e87dac045cf2c7493aaa799ff253e.jpg" alt="" /></a></p>
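<p>The same edit can be scripted with sed instead of an editor. The sketch below works on a stand-in copy of the file (the upstream default is <code>authorizationEnabled: false</code>); run the sed line against your real milvus.yaml:</p>

```shell
# Stand-in copy of the relevant milvus.yaml section.
cat > /tmp/milvus.yaml <<'EOF'
common:
  security:
    authorizationEnabled: false
EOF

# Flip the flag in place, then confirm.
sed -i 's/authorizationEnabled: false/authorizationEnabled: true/' /tmp/milvus.yaml
grep authorizationEnabled /tmp/milvus.yaml
```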
<p><strong>4. Edit the milvus-standalone-docker-compose-gpu.yml file: add the attu web-panel container and add the volume mapping for the config file</strong></p>
<pre><code class="language-shell line-numbers">version: '3.5'
services:
  etcd:
    container_name: milvus-etcd
    image: quay.io/coreos/etcd:v3.5.18
    environment:
      - ETCD_AUTO_COMPACTION_MODE=revision
      - ETCD_AUTO_COMPACTION_RETENTION=1000
      - ETCD_QUOTA_BACKEND_BYTES=4294967296
      - ETCD_SNAPSHOT_COUNT=50000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
    command: etcd -advertise-client-urls=http://etcd:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
    healthcheck:
      test: ["CMD", "etcdctl", "endpoint", "health"]
      interval: 30s
      timeout: 20s
      retries: 3

  minio:
    container_name: milvus-minio
    image: minio/minio:RELEASE.2023-03-20T20-16-18Z
    environment:
      MINIO_ACCESS_KEY: minioadmin
      MINIO_SECRET_KEY: minioadmin
    ports:
      - "9001:9001"
      - "9000:9000"
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
    command: minio server /minio_data --console-address ":9001"
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
      interval: 30s
      timeout: 20s
      retries: 3

  standalone:
    container_name: milvus-standalone
    image: milvusdb/milvus:v2.5.7-gpu
    command: ["milvus", "run", "standalone"]
    security_opt:
    - seccomp:unconfined
    environment:
      ETCD_ENDPOINTS: etcd:2379
      MINIO_ADDRESS: minio:9000
    volumes:
      - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
      # Add the line below to map the config file into the container
      - ${DOCKER_VOLUME_DIRECTORY:-.}/milvus.yaml:/milvus/configs/milvus.yaml
    ports:
      - "19530:19530"
      - "9091:9091"
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: ["gpu"]
              device_ids: ["0"]
    depends_on:
      - "etcd"
      - "minio"

# Add the attu container below at this point in the original docker-compose file; mind the version number and the leading indentation.
  attu:
    container_name: attu
    image: zilliz/attu:v2.5.6
    environment:
      MILVUS_URL: milvus-standalone:19530
    ports:
      - "8000:3000"  # the external port 8000 can be customized
    depends_on:
      - "standalone"

networks:
  default:
    name: milvus
</code></pre>
<h3>Starting Milvus</h3>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_e450eccd322cd2343c926939c6f070e3.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_e450eccd322cd2343c926939c6f070e3.jpg" alt="" /></a></p>
<pre><code class="language-shell line-numbers"># Run these in the installation directory
# Pull the images
docker-compose pull
# Start the containers
docker-compose up -d
# Check startup (health) status
docker-compose ps -a
# Stop the containers
docker-compose down
</code></pre>
<p>Open the ports: connecting to the database requires port 19530, Milvus's default, which can be changed in docker-compose.yml. To reach the web panel, open port 8000 (the port chosen above); behind a reverse proxy this port need not be exposed.</p>
<h3>Verification</h3>
<p>Visit the web panel and change the password: <strong>http://ip:8000</strong></p>
<p><strong>Default username: root<br />
Default password: Milvus</strong></p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_bb1b7a1216bf2fd642e12ed929bf989c.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/04/wp_editor_md_bb1b7a1216bf2fd642e12ed929bf989c.jpg" alt="" /></a></p>
<p><a href="https://www.laobaiblog.top/2025/04/01/docker-compose%e5%ae%89%e8%a3%85%e9%85%8d%e7%bd%ae%e5%90%91%e9%87%8f%e6%95%b0%e6%8d%ae%e5%ba%93milvus%ef%bc%8c%e9%85%8d%e7%bd%ae%e5%8f%af%e8%a7%86%e5%8c%96attu/">Installing the Milvus Vector Database with Docker Compose and Setting Up the Attu GUI</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Installing the Nvidia GPU Driver on Ubuntu 24.04 LTS Desktop</title>
		<link>https://www.laobaiblog.top/2025/03/31/ubuntu24-04-tls%e6%a1%8c%e9%9d%a2%e7%89%88%e5%ae%89%e8%a3%85nvidia%e6%98%be%e5%8d%a1%e9%a9%b1%e5%8a%a8/</link>
		
		<dc:creator><![CDATA[大白]]></dc:creator>
		<pubDate>Mon, 31 Mar 2025 09:58:58 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[分享]]></category>
		<category><![CDATA[ai]]></category>
		<category><![CDATA[deepseek]]></category>
		<category><![CDATA[gcc]]></category>
		<category><![CDATA[nvidia]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">https://www.laobaiblog.top/?p=525</guid>

					<description><![CDATA[<p>After countless system reinstalls and the pitfalls of online tutorials of wildly varying quality, I finally found an approach that works. I. Hardware &#8230;</p>
<p><a href="https://www.laobaiblog.top/2025/03/31/ubuntu24-04-tls%e6%a1%8c%e9%9d%a2%e7%89%88%e5%ae%89%e8%a3%85nvidia%e6%98%be%e5%8d%a1%e9%a9%b1%e5%8a%a8/">Installing the Nvidia GPU Driver on Ubuntu 24.04 LTS Desktop</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<blockquote><p>
  After countless system reinstalls and the pitfalls of online tutorials of wildly varying quality, I finally found an approach that works.
</p></blockquote>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/03/wp_editor_md_5a6cc48511a7c1850a7735fdc8e6ba96.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/03/wp_editor_md_5a6cc48511a7c1850a7735fdc8e6ba96.jpg" alt="" /></a></p>
<h3>I. Hardware Environment</h3>
<table>
<thead>
<tr>
<th>Hardware</th>
<th>Model</th>
</tr>
</thead>
<tbody>
<tr>
<td>CPU</td>
<td>i7-14700K</td>
</tr>
<tr>
<td>GPU</td>
<td>Colorful 5090D ADOC 32G</td>
</tr>
<tr>
<td>RAM</td>
<td>Kingston 128G</td>
</tr>
</tbody>
</table>
<h3>II. Software Environment</h3>
<table>
<thead>
<tr>
<th>Item</th>
<th>Version</th>
</tr>
</thead>
<tbody>
<tr>
<td>OS</td>
<td><a class="wp-editor-md-post-content-link" href="https://cn.ubuntu.com/download">Ubuntu 24.04 LTS Desktop</a></td>
</tr>
<tr>
<td><strong>Kernel version</strong></td>
<td><strong>6.13.8-061308-generic</strong></td>
</tr>
<tr>
<td><strong>gcc version</strong></td>
<td><strong>gcc 14.2.0</strong> (Ubuntu 14.2.0-4ubuntu2~24.04)</td>
</tr>
</tbody>
</table>
<h3>2.1 Kernel Upgrade</h3>
<h4>2.1.1 Before upgrading the kernel, update all packages on the system to ensure compatibility:</h4>
<pre><code class="language-shell line-numbers">sudo apt update
sudo apt upgrade -y
</code></pre>
<h4>2.1.2 Download the kernel packages: the stock 6.11 kernel must be upgraded to 6.13 or later, and gcc to version 14</h4>
<ul>
<li><strong>Download links</strong>:</li>
<li><a class="wp-editor-md-post-content-link" href="https://kernel.ubuntu.com/mainline/v6.13.8/amd64/">kernel.ubuntu.com</a> (<strong>may require a VPN</strong>)</li>
<li><a class="wp-editor-md-post-content-link" href="https://pan.quark.cn/s/e79b0f21c268">Quark netdisk, extraction code: <strong>rmuW</strong></a></li>
</ul>
<pre><code class="language-shell line-numbers">1. linux-headers-6.13.8-061308_6.13.8-061308.202503222044_all.deb
2. linux-headers-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb
3. linux-image-unsigned-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb
4. linux-modules-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb
</code></pre>
<h4>2.1.3 Install the kernel packages</h4>
<pre><code class="language-shell line-numbers">sudo dpkg -i linux-headers-6.13.8-061308_6.13.8-061308.202503222044_all.deb
sudo dpkg -i linux-headers-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb
sudo dpkg -i linux-modules-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb
sudo dpkg -i linux-image-unsigned-6.13.8-061308-generic_6.13.8-061308.202503222044_amd64.deb


# If you run into dependency problems during installation, repair them with
sudo apt --fix-broken install

# Update Grub and reboot: after installing a kernel, update the Grub configuration so the new kernel can be booted
sudo update-grub
# After updating Grub, reboot to load the new kernel
sudo reboot

# Verify the kernel upgrade
uname -r
#6.13.8-061308-generic
</code></pre>
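<p>The "6.13 or later" requirement above can also be checked in a script before proceeding. A small sketch (the helper name <code>kernel_ok</code> is mine) using <code>sort -V</code> for version comparison:</p>

```shell
# kernel_ok A B: succeeds when version A >= version B.
kernel_ok() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

kernel_ok "6.13.8" "6.13" && echo "6.13.8 is new enough"
kernel_ok "6.11.0" "6.13" || echo "6.11.0 needs upgrading first"
# In practice: kernel_ok "$(uname -r | cut -d- -f1)" "6.13" || echo "upgrade the kernel first"
```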
<h3>2.2 Upgrading gcc to 14.2</h3>
<h4>2.2.1 Check the current GCC version</h4>
<pre><code class="language-shell line-numbers">gcc --version
</code></pre>
<h4>2.2.2 Install automatically with APT</h4>
<p>Ubuntu's APT package manager can install GCC easily, but the default package may not be the version you need. To install a specific version, use:</p>
<pre><code class="language-shell line-numbers">sudo apt update
sudo apt install gcc-&lt;version&gt; g++-&lt;version&gt;

#For example, to install GCC 14 (14.2.0 on Ubuntu 24.04), run:
sudo apt install gcc-14 g++-14
</code></pre>
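<p>When scripting this prerequisite check, the major version can be extracted from the <code>gcc --version</code> banner. A sketch (the helper name <code>gcc_major</code> is mine), fed a sample first line of that output:</p>

```shell
# Extract the major version from a `gcc --version` banner line.
gcc_major() {
  echo "$1" | head -n1 | awk '{print $NF}' | cut -d. -f1
}

gcc_major "gcc (Ubuntu 14.2.0-4ubuntu2~24.04) 14.2.0"  # prints 14
# In practice: [ "$(gcc_major "$(gcc --version)")" -ge 14 ] || echo "upgrade gcc first"
```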
<h3>III. Installing the NVIDIA Driver</h3>
<table>
<thead>
<tr>
<th>Distribution</th>
<th>$distro</th>
<th>$arch</th>
<th>$arch_ext</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ubuntu 24.04 LTS</td>
<td>ubuntu2404</td>
<td>x86_64</td>
<td>amd64</td>
</tr>
</tbody>
</table>
<p>Preparation before installing: the kernel headers and development packages for the currently running kernel can be installed with:</p>
<pre><code class="language-shell line-numbers">apt install linux-headers-$(uname -r)
</code></pre>
<h4>3.1 Download the NVIDIA driver:</h4>
<pre><code class="language-shell line-numbers">wget https://developer.download.nvidia.com/compute/nvidia-driver/$version/local_installers/nvidia-driver-local-repo-$distro-$version_$arch_ext.deb
# Substitute your system and driver versions for the parameters; the actual address is:
https://developer.download.nvidia.com/compute/nvidia-driver/570.124.06/local_installers/nvidia-driver-local-repo-ubuntu2404-570.124.06_1.0-1_amd64.deb
# $version is the NVIDIA driver version
</code></pre>
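<p>Substituting the values from the table above into the template, the download URL can be assembled like this (the <code>_1.0-1_</code> package revision is taken from the real address above):</p>

```shell
# Fill in the placeholders from the distribution table and the chosen driver version.
distro=ubuntu2404
arch_ext=amd64
version=570.124.06
echo "https://developer.download.nvidia.com/compute/nvidia-driver/${version}/local_installers/nvidia-driver-local-repo-${distro}-${version}_1.0-1_${arch_ext}.deb"
```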
<h4>3.2 Install the Nvidia driver:</h4>
<pre><code class="language-shell line-numbers">dpkg -i nvidia-driver-local-repo-ubuntu2404-570.124.06_1.0-1_amd64.deb
apt update
# Enroll the temporary public GPG key:
cp /var/nvidia-driver-local-repo-ubuntu2404-570.124.06/nvidia-driver-*-keyring.gpg /usr/share/keyrings/
</code></pre>
<h4>3.3 Installing from the network repository</h4>
<p>Install the new cuda-keyring package:</p>
<pre><code class="language-shell line-numbers">wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb
dpkg -i cuda-keyring_1.1-1_all.deb
apt update
</code></pre>
<h4>3.4 Run the driver installation</h4>
<pre><code class="language-shell line-numbers"># Open Kernel Modules
apt install nvidia-open
# Proprietary Kernel Modules
apt install cuda-drivers
# Reboot the system
reboot
</code></pre>
<h3>IV. Verification</h3>
<pre><code class="language-shell line-numbers">nvidia-smi
</code></pre>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2025/03/wp_editor_md_a7a70e297505c94abc9093406fc8d7a4.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2025/03/wp_editor_md_a7a70e297505c94abc9093406fc8d7a4.jpg" alt="" /></a></p>
<p><a class="wp-editor-md-post-content-link" href="https://docs.nvidia.com/datacenter/tesla/driver-installation-guide/index.html">Official reference</a>: https://docs.nvidia.com/datacenter/tesla/driver-installation-guide/index.html</p>
<p><a href="https://www.laobaiblog.top/2025/03/31/ubuntu24-04-tls%e6%a1%8c%e9%9d%a2%e7%89%88%e5%ae%89%e8%a3%85nvidia%e6%98%be%e5%8d%a1%e9%a9%b1%e5%8a%a8/">Installing the Nvidia GPU Driver on Ubuntu 24.04 LTS Desktop</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Installing nps with Docker for [OpenWrt] NAT Traversal</title>
		<link>https://www.laobaiblog.top/2022/11/02/docker%e5%ae%89%e8%a3%85nps%e5%ae%9e%e7%8e%b0openwrt%e5%86%85%e7%bd%91%e7%a9%bf%e9%80%8f/</link>
		
		<dc:creator><![CDATA[大白]]></dc:creator>
		<pubDate>Wed, 02 Nov 2022 06:36:43 +0000</pubDate>
				<category><![CDATA[Linux]]></category>
		<category><![CDATA[分享]]></category>
		<category><![CDATA[linux]]></category>
		<category><![CDATA[nps]]></category>
		<category><![CDATA[openwrt]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[内网穿透]]></category>
		<guid isPermaLink="false">https://www.laobaiblog.top/?p=334</guid>

					<description><![CDATA[<p>I. NPS overview: NPS is a lightweight, powerful NAT-traversal proxy server supporting tcp and udp forwar &#8230;</p>
<p><a href="https://www.laobaiblog.top/2022/11/02/docker%e5%ae%89%e8%a3%85nps%e5%ae%9e%e7%8e%b0openwrt%e5%86%85%e7%bd%91%e7%a9%bf%e9%80%8f/">Installing nps with Docker for [OpenWrt] NAT Traversal</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h3>I. NPS Overview</h3>
<p>NPS is a lightweight, powerful NAT-traversal (intranet penetration) proxy server. It supports tcp and udp traffic forwarding as well as intranet http and socks5 proxying, plus snappy compression, site protection, encrypted transport, multiplexing, header rewriting, and more. It ships with web-based management and a multi-user mode; its management UI is much more convenient and approachable than FRP's.</p>
<h3>II. Requirements</h3>
<ul>
<li>A server with a public IP.</li>
<li>Open ports, e.g. 8080, 8024, 80, 443; they need not be these exact ports, since nps can be configured to use any port and coexist with other services.</li>
</ul>
<h3>III. Related Documentation</h3>
<ul>
<li>NPS configuration docs: https://ehang-io.github.io/nps/#/</li>
<li>NPS release packages: https://github.com/ehang-io/nps/releases</li>
<li>NPS source code: https://github.com/ehang-io/nps</li>
</ul>
<h3>IV. Environment</h3>
<ul>
<li>The server side is my Ubuntu (arm) cloud server;</li>
<li><strong>The client side is the nps plugin on a local N1 box running OpenWrt (no client installation needed)</strong>.</li>
</ul>
<h3>V. Installation Walkthrough</h3>
<h4>1. Server side</h4>
<ul>
<li><strong>Pull the nps image with Docker</strong></li>
</ul>
<pre><code class="language-shell line-numbers"># Pull the nps image
docker pull ffdfgdfg/nps
# Create the nps configuration directory
mkdir -p /root/nps/conf
</code></pre>
<ul>
<li>Download the configuration files from GitHub and <strong>copy the conf directory from nps-master to /root/nps/conf on the server</strong></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_265c4dd6295be6b20c0ebbaea52b7f2f.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_265c4dd6295be6b20c0ebbaea52b7f2f.jpg" alt="" /></a></p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_5f324c40e1444e06a3980f7f451b335a.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_5f324c40e1444e06a3980f7f451b335a.jpg" alt="" /></a></p>
<ul>
<li><strong>Edit the nps.conf configuration</strong></li>
</ul>
<pre><code class="language-shell line-numbers"># Edit nps.conf
vim /root/nps/conf/nps.conf

# nps.conf
appname = nps
runmode = dev

http_proxy_ip=0.0.0.0
http_proxy_port=19000
https_proxy_port=19001
https_just_proxy=true

https_default_cert_file=conf/server.pem
https_default_key_file=conf/server.key

bridge_type=tcp
bridge_port=19002
bridge_ip=0.0.0.0

public_vkey=123

log_level=7

web_host=a.o.com
web_username=admin
web_password=123
web_port = 19003
web_ip=0.0.0.0
web_base_url=
web_open_ssl=false
web_cert_file=conf/server.pem
web_key_file=conf/server.key
auth_crypt_key =1234567887654321
allow_user_login=false
allow_user_register=false
allow_user_change_username=false
allow_flow_limit=false
allow_rate_limit=false
allow_tunnel_num_limit=false
allow_local_proxy=false
allow_connection_num_limit=false
allow_multi_ip=false
system_info_display=false
http_cache=false
http_cache_length=100
http_add_origin_header=false
</code></pre>
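<p>The sample configuration above ships weak secrets (<code>public_vkey=123</code>, <code>web_password=123</code>, a guessable <code>auth_crypt_key</code>). Before exposing the panel, it is worth generating random replacements; a sketch that keeps <code>auth_crypt_key</code> at 16 characters like the sample value:</p>

```shell
# Generate random hex secrets from /dev/urandom.
web_password=$(head -c 12 /dev/urandom | od -An -tx1 | tr -d ' \n')   # 24 hex chars
auth_crypt_key=$(head -c 8 /dev/urandom | od -An -tx1 | tr -d ' \n')  # 16 hex chars
echo "web_password=$web_password"
echo "auth_crypt_key=$auth_crypt_key"
```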
<ul>
<li><strong>Start nps with docker; make sure the firewall allows the relevant ports</strong></li>
</ul>
<pre><code class="language-shell line-numbers"># Start nps, exposing ports 19000-19010
docker run -d -p 19000-19010:19000-19010 -v /root/nps/conf:/conf --name=nps --restart=always ffdfgdfg/nps
</code></pre>
<ul>
<li><strong>Open the nps panel at http://IP:19003. The default credentials are the admin/123 from the configuration file (change them).</strong></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_dc27a065af42956f136ee1424fd069e9.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_dc27a065af42956f136ee1424fd069e9.jpg" alt="" /></a></p>
<ul>
<li><strong>Create an nps client and a tcp tunnel</strong></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_1f47862aa69dd09a9dd397316ba0d9cc.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_1f47862aa69dd09a9dd397316ba0d9cc.jpg" alt="" /></a></p>
<p>Create a client, then open the client details to find the <strong>-server/-key</strong> parameters.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_3e67ef83c08e03524f834e442df1a1a9.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_3e67ef83c08e03524f834e442df1a1a9.jpg" alt="" /></a></p>
<h4>2. Client side</h4>
<ul>
<li><strong>On the intranet OpenWrt box, open the nps plugin, enter the server and key parameters, and save; the client then shows as connected on the server side.</strong></li>
</ul>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_fa3e4010917fd1997f207478acc8c24f.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_fa3e4010917fd1997f207478acc8c24f.jpg" alt="" /></a></p>
<h4>3. Configure forwarding</h4>
<ul>
<li><strong>Check the client status</strong></li>
</ul>
<p>Here a client with ID 6 shows as online.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_701405fdb3565a5c8c329c7b615ee226.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_701405fdb3565a5c8c329c7b615ee226.jpg" alt="" /></a></p>
<ul>
<li><strong>Add a tcp tunnel</strong></li>
</ul>
<p>The configuration below <strong>forwards server port 1XXX6 to ports such as 80/5244 on 192.168.50.1/3 inside the LAN</strong>. UDP, SOCKS, and HTTP tunnels are configured the same way, so they are not repeated here.</p>
<p><a class="wp-editor-md-post-content-link" href="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_7a46b904fdeb607003c0270434ee4951.jpg"><img decoding="async" src="https://www.laobaiblog.top/wp-content/uploads/2022/11/wp_editor_md_7a46b904fdeb607003c0270434ee4951.jpg" alt="" /></a></p>
<h4>4. Summary</h4>
<p>Both NPS and the OpenWrt side are quick to set up. If you have your own VPS, give it a try and expose your home machine through the external server. <strong>Visiting the public IP plus the corresponding 1XXXX port reaches the matching intranet service.</strong></p>
<h4>5. Further Reading</h4>
<p>Other posts may suit different setups:</p>
<p>A detailed nps NAT-traversal tutorial for openwrt beginners: https://vpsxb.net/419/<br />
The nps NAT-traversal plugin service on openwrt soft routers: https://www.vjsun.com/424.html<br />
NAT traversal with nps, turning a PC into a free cloud server: https://tangly1024.com/article/nps-centos-nat-traversal<br />
nps NAT traversal on an N1 box running openwrt, connected to Tencent Cloud Shanyu: https://blog.csdn.net/hebine7/article/details/122373365<br />
Installing nps with docker for NAT traversal: https://5616760.com/docker/nps/2021/08/09/docker-nps.html</p>
<p><a href="https://www.laobaiblog.top/2022/11/02/docker%e5%ae%89%e8%a3%85nps%e5%ae%9e%e7%8e%b0openwrt%e5%86%85%e7%bd%91%e7%a9%bf%e9%80%8f/">Installing nps with Docker for [OpenWrt] NAT Traversal</a> first appeared on <a href="https://www.laobaiblog.top">Liao&#039;s blog</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
