<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Transformers on Sawyer Zheng's Blog</title><link>https://elated-raman-42e0c2.netlify.app/tags/transformers/</link><description>Recent content in Transformers on Sawyer Zheng's Blog</description><generator>Hugo</generator><language>zh-cn</language><lastBuildDate>Fri, 05 Jan 2024 13:18:49 +0800</lastBuildDate><atom:link href="https://elated-raman-42e0c2.netlify.app/tags/transformers/index.xml" rel="self" type="application/rss+xml"/><item><title>transformers</title><link>https://elated-raman-42e0c2.netlify.app/post/notes/ai/huggingface/transformers/</link><pubDate>Tue, 08 Aug 2023 00:00:00 +0000</pubDate><guid>https://elated-raman-42e0c2.netlify.app/post/notes/ai/huggingface/transformers/</guid><description>&lt;div id="outline-container-headline-1" class="outline-2"&gt;
&lt;h2 id="headline-1"&gt;
References
&lt;/h2&gt;
&lt;div id="outline-text-headline-1" class="outline-text-2"&gt;
&lt;div id="outline-container-headline-2" class="outline-3"&gt;
&lt;h3 id="headline-2"&gt;
Model training on GPU, multi-GPU, CPU, multi-CPU, etc.
&lt;/h3&gt;
&lt;div id="outline-text-headline-2" class="outline-text-3"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/docs/transformers/perf_train_gpu_many"&gt;Efficient Training on Multiple GPUs&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://huggingface.co/docs/transformers/performance"&gt;Performance and Scalability&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;/div&gt;
&lt;/div&gt;
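The guides above cover single- and multi-GPU setups. As a rough sketch (the script name `train.py` is hypothetical), two common ways to launch the same training script across 4 local GPUs are PyTorch's `torchrun` and Hugging Face's `accelerate launch`:

```shell
# Both launchers spawn one training process per GPU.

# PyTorch's built-in distributed launcher:
torchrun --nproc_per_node=4 train.py

# Hugging Face Accelerate's launcher (after running `accelerate config`,
# or with the topology given explicitly on the command line):
accelerate launch --multi_gpu --num_processes 4 train.py
```

For multiple machines, both launchers take additional rank/address flags (e.g. `--nnodes` and `--node_rank` for `torchrun`); the Zhihu article linked below walks through a concrete multi-node setup.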
&lt;div id="outline-container-headline-3" class="outline-3"&gt;
&lt;h3 id="headline-3"&gt;
Multi-machine parallel training methods
&lt;/h3&gt;
&lt;div id="outline-text-headline-3" class="outline-text-3"&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href="https://zhuanlan.zhihu.com/p/462722054"&gt;Transformers多机多卡的炼丹实践 - 知乎&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div id="outline-container-headline-4" class="outline-4"&gt;
&lt;h4 id="headline-4"&gt;
Analyzing slow training speed
&lt;/h4&gt;
&lt;div id="outline-text-headline-4" class="outline-text-4"&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href="https://github.com/huggingface/accelerate/issues/192"&gt;huggingface/accelerate#192 The more GPU I use, the slower the training speed.&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href="https://github.com/huggingface/transformers/issues/19918"&gt;huggingface/transformers#19918 Why training on Multiple GPU is slower than tr…&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
&lt;/div&gt;
&lt;/div&gt;
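One recurring explanation in these issues is batch-size accounting: under data parallelism the optimizer sees the per-device batch multiplied by the number of processes, so adding GPUs reduces the number of optimizer steps per epoch while each step pays extra all-reduce communication cost. A small arithmetic sketch (all numbers hypothetical):

```python
def effective_batch_size(per_device_batch, num_gpus, grad_accum_steps):
    """Global batch size seen by the optimizer per update step."""
    return per_device_batch * num_gpus * grad_accum_steps

# Moving from 1 to 4 GPUs without adjusting the per-device batch size
# quadruples the global batch: fewer optimizer steps per epoch, and each
# step now also includes gradient synchronization across devices.
print(effective_batch_size(8, 1, 4))  # 32
print(effective_batch_size(8, 4, 4))  # 128
```

Whether wall-clock time per epoch improves then depends on whether the per-step speedup outweighs the synchronization overhead, which is the trade-off discussed in both issues above.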
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;/div&gt;
&lt;div id="outline-container-headline-5" class="outline-2"&gt;
&lt;h2 id="headline-5"&gt;
Model conversion
&lt;/h2&gt;
&lt;div id="outline-text-headline-5" class="outline-text-2"&gt;
&lt;div id="outline-container-headline-6" class="outline-3"&gt;
&lt;h3 id="headline-6"&gt;
Converting to the huggingface transformers format
&lt;/h3&gt;
&lt;div id="outline-text-headline-6" class="outline-text-3"&gt;
&lt;p&gt;References:&lt;/p&gt;</description></item></channel></rss>