Repository Details
HelloGitHub Rating: 10.0 (1 rating)
PyTorch pre-trained weights for Google's groundbreaking language representation model (BERT), combined with the PyTorch framework to make it much easier to get started.
Free · Apache-2.0
Stars: 149k
Chinese: Yes
Language: Python
Active: Yes
Contributors: 3k
Issues: 2k
Organization: Yes
Latest: 4.56.0
Forks: 30k
License: Apache-2.0
PyTorch pre-trained weights for Google's groundbreaking language representation model (BERT), combined with the PyTorch framework to make it much easier to get started. The PyTorch version is friendlier for beginners to get hands-on and experiment with. Example code:

```python
import torch
from pytorch_pretrained_bert import BertTokenizer, BertModel, BertForMaskedLM

# Load pre-trained model tokenizer (vocabulary)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')

# Tokenize input
text = "Who was Jim Henson ? Jim Henson was a puppeteer"
tokenized_text = tokenizer.tokenize(text)

# Mask a token that we will try to predict back with `BertForMaskedLM`
masked_index = 6
tokenized_text[masked_index] = '[MASK]'
assert tokenized_text == ['who', 'was', 'jim', 'henson', '?', 'jim', '[MASK]',
                          'was', 'a', 'puppet', '##eer']

# Convert tokens to vocabulary indices
indexed_tokens = tokenizer.convert_tokens_to_ids(tokenized_text)
# Define sentence A and B indices associated to 1st and 2nd sentences (see paper)
segments_ids = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1]

# Convert inputs to PyTorch tensors
tokens_tensor = torch.tensor([indexed_tokens])
segments_tensors = torch.tensor([segments_ids])
```
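Continuing from the snippet above, here is a minimal sketch of how these tensors would typically be used, assuming the classic `pytorch_pretrained_bert` API (where `BertModel` returns a list of per-layer hidden states plus a pooled output, and `BertForMaskedLM` returns raw vocabulary scores when no labels are passed); newer `transformers` releases wrap these results in output objects instead:

```python
# Extract hidden states with the plain encoder
# (this is what the otherwise-unused BertModel import is for)
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()
with torch.no_grad():
    encoded_layers, _ = model(tokens_tensor, segments_tensors)
# One tensor of shape [1, len(tokenized_text), 768] per Transformer layer
assert len(encoded_layers) == 12

# Predict the masked token with the masked-LM head
masked_lm_model = BertForMaskedLM.from_pretrained('bert-base-uncased')
masked_lm_model.eval()
with torch.no_grad():
    predictions = masked_lm_model(tokens_tensor, segments_tensors)

# Pick the highest-scoring vocabulary id at the masked position
predicted_index = torch.argmax(predictions[0, masked_index]).item()
predicted_token = tokenizer.convert_ids_to_tokens([predicted_index])[0]
print(predicted_token)  # expected: 'henson'
```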
Included in:
Vol.34

Comments

No comments yet