ymcui
NLP Researcher. Mainly interested in pre-trained language models, machine reading comprehension, question answering, etc.
- Joint Laboratory of HIT and iFLYTEK Research (HFL)
- Beijing, China
- http://ymcui.github.io
- https://orcid.org/0000-0002-2452-375X
- @KCrosner
- https://scholar.google.com/citations?user=Xl53m0QAAAAJ&hl=en
Pinned repositories
- Chinese-LLaMA-Alpaca (Public): Chinese LLaMA & Alpaca large language models, with local CPU/GPU training and deployment (Chinese LLaMA & Alpaca LLMs)
- Chinese-LLaMA-Alpaca-2 (Public): Chinese LLaMA-2 & Alpaca-2 large models, phase 2 of the project, plus 64K long-context models (Chinese LLaMA-2 & Alpaca-2 LLMs with 64K long context models)
- Chinese-LLaMA-Alpaca-3 (Public): Chinese alpaca large models, phase 3 of the project (Chinese Llama-3 LLMs), developed from Meta Llama 3
- Chinese-BERT-wwm (Public): Pre-Training with Whole Word Masking for Chinese BERT (Chinese BERT-wwm series models)
- Chinese-Mixtral (Public): Chinese Mixtral mixture-of-experts large models (Chinese Mixtral MoE LLMs)