Commit 694d8433 authored by saeedm

Initial commit
# T5 Annotator
## Installation
```bash
pip install -r requirements.txt
```
## Training
For training, only a training file is required. The file should contain the ambiguous label and a list of possible alternatives; check `train_sample.json` for examples (a format sketch follows the command below). The script fine-tunes a model and saves it together with its tokenizer. A JSON file containing some training statistics is also output (see the example after the parameter list below).
```bash
python T5AnnotatorTrain.py --train_file train_sample.json \
    --verbose
```
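Based on how `T5AnnotatorTrain.py` parses it, the training file is expected to be a JSON list of `[prompt, alternatives]` pairs, where the alternatives are a list of strings (entries with three or fewer alternatives are filtered out). A minimal sketch; the entries here are illustrative, not taken from the real sample file:
```json
[
  ["ambiguous label: appeared", ["appeared", "appearance", "appear", "film"]],
  ["ambiguous label: set", ["set", "setting", "collection", "group"]]
]
```
After preprocessing, each datapoint is flattened into a pair like `['ambiguous label: appeared', 'appeared,appearance,appear,film']`.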
More aspects of the training process can be configured:
```bash
python T5AnnotatorTrain.py --train_file train_sample.json \
    --model_arch t5-base \
    --output_path t5_output \
    --max_length 100 \
    --epochs 10 \
    --batch_size 16 \
    --learning_rate 1e-4 \
    --epsilon 1e-6 \
    --verbose \
    --time_step 100
```
* __train_file__: JSON file containing the training data
* __model_arch__: T5 architecture to fine-tune (e.g. `t5-base`)
* __output_path__: path where the model and tokenizer are saved
* __max_length__: max length for tokenization
* __epochs__: number of training epochs
* __batch_size__: training batch size
* __learning_rate__: learning rate
* __epsilon__: AdamW epsilon value
* __verbose__: print training progress
* __time_step__: number of steps between progress prints
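The statistics file is written to `training_stats.json` and records per-epoch loss and timing plus the total training time. A sketch of its shape, inferred from the training script (values are illustrative):
```json
[
  {
    "epoch": 1,
    "Training Loss": 1.52,
    "Training Time": "0:03:12"
  },
  {
    "total_train_time": "0:03:12"
  }
]
```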
## Testing
For testing, an input file and a model path are required; check `sample_input.tsv` for examples (a format sketch follows the command below). The script produces a new TSV file with the annotations added by the fine-tuned model, and it caches the generated labels for later use.
```bash
python T5AnnotatorTest.py --input_file sample_input.tsv \
    --model_path t5_output/
```
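Each row of the input TSV pairs a serialized table header and attribute pair with its expected label (`None` when no label applies); the input and the label are separated by a tab. The first rows of the bundled `sample_input.tsv` look like this:
```
Date|Code|Event|Description attr1: Date attr2: Event	birthday
Date|Code|Event|Description attr1: Description attr2: Event	None
```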
A cache file can be used:
```bash
python T5AnnotatorTest.py --input_file sample_input.tsv \
    --model_path t5_output/ \
    --cache_file cache.json \
    --gen_steps 10 \
    --max_length 100
```
* __input_file__: TSV file containing the test data
* __model_path__: path to the fine-tuned model and tokenizer
* __cache_file__: cache file in JSON format
* __gen_steps__: number of generations sampled per attribute
* __max_length__: max length for tokenization
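Based on how `T5AnnotatorTest.py` reads and writes it, the cache file is a flat JSON object that maps each lowercased attribute to the list of labels generated for it (the entries below are illustrative):
```json
{
  "date": ["birthday", "dating", "day"],
  "event": ["birthday", "occasion"]
}
```
If no `--cache_file` is given, the script still writes the cache it builds to `T5_cache.json`.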
## License
[MIT](https://choosealicense.com/licenses/mit/)
# T5AnnotatorTest.py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
import pandas as pd
import re
from tqdm import tqdm
import json
import argparse
# Create the parser
my_parser = argparse.ArgumentParser()
# Add the arguments
my_parser.add_argument('--input_file',
                       metavar='fpath',
                       type=str,
                       help='the path to the input file',
                       required=True)
my_parser.add_argument('--model_path',
                       metavar='mpath',
                       type=str,
                       help='the path to the model and tokenizer',
                       required=True)
my_parser.add_argument('--cache_file',
                       metavar='cfile',
                       type=str,
                       help='the path to the cache')
my_parser.add_argument('--gen_steps',
                       metavar='num',
                       type=int,
                       default=10,
                       help='the number of decoder generation steps')
my_parser.add_argument('--max_length',
                       metavar='ml',
                       type=int,
                       default=100,
                       help='the max length for tokenization')
args = my_parser.parse_args()
input_file = args.input_file
model_path = args.model_path
cache_file = args.cache_file
num = args.gen_steps
MAX_LENGTH = args.max_length
# Load model and tokenizer
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = T5Tokenizer.from_pretrained(model_path)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device('cpu')
model = model.to(device)
# Load file
# Load file (tab-separated, no header: input<TAB>expected output)
df = pd.read_csv(input_file, delimiter='\t', header=None)
df.columns = ['Input', 'Output']
# Load cache
if cache_file:
    cachedAttributes = json.load(open(cache_file, "r"))
else:
    cachedAttributes = {}
# Generation Function
def pred(x):
    # train mode keeps dropout active, so repeated calls can yield different generations
    model.train()
    input_ids = tokenizer.encode_plus(f'ambiguous label: {x}',
                                      max_length=MAX_LENGTH,
                                      truncation=True,
                                      return_tensors='pt',
                                      padding='max_length').input_ids
    input_ids = input_ids.to(device)
    return tokenizer.decode(model.generate(input_ids=input_ids, temperature=1)[0])
# Extract Attributes
all_attrbs = []
F = []  # indices of rows that do not match the expected input pattern
for i in tqdm(range(len(df))):
    x = df.iloc[i]
    M = re.findall(".* attr1: (.*) attr2: (.*)", x['Input'])
    if M:
        attr1, attr2 = M[0]
        all_attrbs.extend([attr1, attr2])
    else:
        F.append(i)
all_attrbs = list(set(all_attrbs))
# Generate Labels
for attr in tqdm(all_attrbs[:]):
    if attr.lower() not in cachedAttributes:
        a = [pred(attr.lower()) for _ in range(num)]
        # also try the attribute with '-' and '_' stripped out
        if '-' in attr or '_' in attr:
            modified_attr = attr.replace('-', "").replace('_', "")
            b = [pred(modified_attr.lower()) for _ in range(num)]
            a = a + b
        # strip special tokens, split comma-separated generations, drop empties and duplicates
        filtered_a = list(set([xx for xx in ",".join([x.replace("<pad> ", "").replace("</s>", "") for x in a]).split(",") if xx]))
        cachedAttributes[attr] = filtered_a
cachedAttributes = {k.lower(): v for (k, v) in cachedAttributes.items()}
if not cache_file:
    cache_file = 'T5_cache.json'
json.dump(cachedAttributes, open(cache_file, "w"))
# Generate T5 Annotations
added_labels = []
for i in tqdm(range(len(df))):
    if i not in F:
        x = df.iloc[i]
        attr1, attr2 = re.findall(".* attr1: (.*) attr2: (.*)", x['Input'])[0]
        # keep only labels generated for both attributes
        joined = [x for x in cachedAttributes[attr1.lower()] if x in cachedAttributes[attr2.lower()]]
        # drop the existing label and the attribute names themselves
        joined = [j for j in joined if j not in [str(x['Output']).lower(), attr1.lower(), attr2.lower()]]
        joined = list(set(joined))
        added_labels.append(joined)
df = df.drop(F, axis=0)
df['New Labels'] = added_labels
new_df = []
for i in tqdm(range(len(df))):
    x = df.iloc[i]
    X = x['Input']
    Y = [x['Output']] if x['Output'] == 'None' else [x['Output']] + x['New Labels']
    separated = [[X, y] for y in Y]
    new_df.extend(separated)
final_df = pd.DataFrame(new_df)
# drop single-character labels
final_df = final_df[~final_df.apply(lambda x: type(x[1]) == str and len(x[1]) == 1, axis=1)]
output_file = input_file.split('.tsv')[0] + '_T5Annotations.tsv'
final_df.to_csv(output_file, sep='\t', index=False)
# TODO order
# T5AnnotatorTrain.py
from transformers import T5ForConditionalGeneration, T5Tokenizer
import torch
from torch.optim import AdamW
from torch.utils.data import TensorDataset
from torch.utils.data import DataLoader, RandomSampler
from tqdm import tqdm
import time
import json
import datetime
import argparse
# Create the parser
my_parser = argparse.ArgumentParser()
# Add the arguments
my_parser.add_argument('--train_file',
                       metavar='fpath',
                       type=str,
                       help='the path to the train file',
                       required=True)
my_parser.add_argument('--model_arch',
                       metavar='arch',
                       type=str,
                       default='t5-base',
                       help='model architecture')
my_parser.add_argument('--output_path',
                       metavar='op',
                       type=str,
                       default='t5_output',
                       help='output path')
my_parser.add_argument('--max_length',
                       metavar='ml',
                       type=int,
                       default=100,
                       help='the max length for tokenization')
my_parser.add_argument('--epochs',
                       metavar='epochs',
                       type=int,
                       default=10,
                       help='number of epochs')
my_parser.add_argument('--batch_size',
                       metavar='bs',
                       type=int,
                       default=16,
                       help='Training Batch Size')
my_parser.add_argument('--learning_rate',
                       metavar='lr',
                       type=float,
                       default=1e-4,
                       help='Learning Rate')
my_parser.add_argument('--epsilon',
                       metavar='eps',
                       type=float,
                       default=1e-6,
                       help='AdamW Epsilon')
my_parser.add_argument('--verbose',
                       action='store_true',
                       help='Verbose')
my_parser.add_argument('--time_step',
                       metavar='ts',
                       type=int,
                       default=100,
                       help='Time Step for Verbose')
args = my_parser.parse_args()
train_file = args.train_file
arch = args.model_arch
MAX_LENGTH = args.max_length
output_path = args.output_path
epochs = args.epochs
TRAIN_BATCH_SIZE = args.batch_size
LEARNING_RATE = args.learning_rate
eps = args.epsilon
verbose = args.verbose
time_step_size = args.time_step
def set_order(a):
    # deduplicate while preserving first-occurrence order
    return sorted(set(a), key=a.index)
def format_time(elapsed):
    elapsed_rounded = int(round(elapsed))
    return str(datetime.timedelta(seconds=elapsed_rounded))
# Load file
data = json.load(open(train_file, "r"))
# Heuristic: keep only examples with more than three alternatives (this encourages no substrings)
data = [x for x in data if len(x[1]) > 3]
# flatten comma-separated alternatives, strip whitespace, and deduplicate in order
data = [[data_sample[0], ",".join(set_order([yy.strip() for y in [x.split(",") for x in data_sample[1]] for yy in y]))] for data_sample in data]
data = [[x[0].lower(), ",".join([xx.strip() for xx in x[1].lower().split(',')])] for x in data[:]]
# datapoint example: ['ambiguous label: appeared', 'appeared,appearance,appear,film']
inputs = [x[0] for x in data]
# Load models
model = T5ForConditionalGeneration.from_pretrained(arch)
tokenizer = T5Tokenizer.from_pretrained(arch)
device = torch.device("cuda") if torch.cuda.is_available() else torch.device('cpu')
model = model.to(device)
train_input_ids_ = []
for i in tqdm(inputs):
    encoded = tokenizer.encode_plus(i,
                                    max_length=MAX_LENGTH,
                                    truncation=True,
                                    return_tensors='pt',
                                    padding='max_length')
    train_input_ids_.append(encoded['input_ids'])
train_input_ids = torch.cat(train_input_ids_, dim=0)
labels = [x[1] for x in data]
train_labels = []
for i in tqdm(labels):
    encoded = tokenizer.encode_plus(i,
                                    max_length=MAX_LENGTH,
                                    truncation=True,
                                    return_tensors='pt',
                                    padding='max_length')
    train_labels.append(encoded['input_ids'])
train_labels = torch.cat(train_labels, dim=0)
train_dataset = TensorDataset(train_input_ids, train_labels)
train_dataloader = DataLoader(dataset=train_dataset,
                              sampler=RandomSampler(train_dataset),
                              batch_size=TRAIN_BATCH_SIZE)
optimizer = AdamW(model.parameters(),
                  lr=LEARNING_RATE,
                  eps=eps)
total_steps = len(train_dataloader) * epochs
training_stats = []
total_t0 = time.time()
for epoch_i in range(epochs):
    # ========================================
    #               Training
    # ========================================
    print("")
    print('======== Epoch {:} / {:} ========'.format(epoch_i + 1, epochs))
    print('Training...')
    t0 = time.time()
    total_train_loss = 0.0
    model.train()
    for step, batch in enumerate(train_dataloader):
        if step % time_step_size == 0 and step != 0:
            elapsed = format_time(time.time() - t0)
            if verbose:
                print('  Batch {:>5,} of {:>5,}.  Elapsed: {:}.'.format(step, len(train_dataloader), elapsed))
        b_input_ids = batch[0].to(device)
        b_labels = batch[1].to(device)
        o = model(input_ids=b_input_ids, labels=b_labels)
        loss = o.loss
        total_train_loss += loss.item()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    avg_train_loss = total_train_loss / len(train_dataloader)
    training_time = format_time(time.time() - t0)
    if verbose:
        print("")
        print("  Average training loss: {0:.2f}".format(avg_train_loss))
        print("  Training epoch took: {:}".format(training_time))
    training_stats.append(
        {
            'epoch': epoch_i + 1,
            'Training Loss': avg_train_loss,
            'Training Time': training_time,
        }
    )
total_train_time = format_time(time.time() - total_t0)
training_stats.append({'total_train_time': total_train_time})
print("")
print("Training complete!")
print("Total training took {:} (h:mm:ss)".format(total_train_time))
model.save_pretrained(output_path)
tokenizer.save_pretrained(output_path)
json.dump(training_stats, open('training_stats.json', 'w'), indent=5)
# requirements.txt
transformers
sentencepiece
torch
pandas
tqdm
Date|Code|Event|Description attr1: Date attr2: Event birthday
Date|Code|Event|Description attr1: Event attr2: Date birthday
Date|Code|Event|Description attr1: Description attr2: Event None
Date|Code|Event|Description attr1: Event attr2: Code None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Place attr2: Name identify
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Place attr2: Score set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Place attr2: Time set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Name attr2: Place identify
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Name attr2: Score mark
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Time attr2: Place set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Time attr2: Score set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Score attr2: Place set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Score attr2: Name mark
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Score attr2: Time set
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Team attr2: Place None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Place attr2: Year None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Team attr2: Time None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Team attr2: Name None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Team attr2: Time None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Heat(Pl) attr2: Place None
Place|Name|Year|Team|Time|Score|Heat(Pl) attr1: Year attr2: Score None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: cover attr2: author book
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: author attr2: cover book
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: num ratings attr2: my rating rating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date pub attr2: date pub (ed.) pub
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date pub (ed.) attr2: date pub pub
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: rating attr2: review evaluation
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: my rating attr2: num ratings rating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: review attr2: rating evaluation
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date started attr2: date added dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date started attr2: date read dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date read attr2: date purchased date
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date read attr2: date started dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date read attr2: date added dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date added attr2: date started dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date added attr2: date read dating
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date purchased attr2: date read date
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date started attr2: date purchased None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: my rating attr2: date started None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date added attr2: avg rating None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date pub attr2: date read None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: avg rating attr2: comments None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: read count attr2: date started None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: cover attr2: notes None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: votes attr2: date purchased None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: author attr2: isbn None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: num ratings attr2: author None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date read attr2: title None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: review attr2: date read None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: owned attr2: purchase location None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: date read attr2: rating None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: votes attr2: isbn13 None
#|cover|title|author|isbn|isbn13|asin|num pages|avg rating|num ratings|date pub|date pub (ed.)|rating|my rating|review|notes|recommender|comments|votes|read count|date started|date read|date added|date purchased|owned|purchase location|condition|format attr1: condition attr2: author None
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: Pag attr2: City town
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: City attr2: Pag town
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: City attr2: State capital
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: State attr2: City capital
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: Long attr2: Class None
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: Class attr2: State None
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: Lat attr2: Frequency None
Loc #|Frequency|Emission|Class|Units|Pag|Pwr|Lat|Long|City|County|State attr1: County attr2: County None
Denim|US|UK|AU|FR|IT|JP|China (Tops)|China (Bottoms) attr1: China (Tops) attr2: China (Bottoms) china
Denim|US|UK|AU|FR|IT|JP|China (Tops)|China (Bottoms) attr1: China (Bottoms) attr2: China (Tops) china
Denim|US|UK|AU|FR|IT|JP|China (Tops)|China (Bottoms) attr1: AU attr2: JP None
Denim|US|UK|AU|FR|IT|JP|China (Tops)|China (Bottoms) attr1: IT attr2: IT None
Name|Album|Artist|Time|Price attr1: Name attr2: Artist person
Name|Album|Artist|Time|Price attr1: Artist attr2: Name person
Name|Album|Artist|Time|Price attr1: Time attr2: Price money
Name|Album|Artist|Time|Price attr1: Price attr2: Time money
Name|Album|Artist|Time|Price attr1: Album attr2: Name None
Name|Album|Artist|Time|Price attr1: Artist attr2: Time None
Name|Album|Artist|Time|Price attr1: Name attr2: Album None
Games|Event|Language|Location|Level|Distribution period attr1: Location attr2: Level stage
Games|Event|Language|Location|Level|Distribution period attr1: Location attr2: Level place
Games|Event|Language|Location|Level|Distribution period attr1: Level attr2: Location stage
Games|Event|Language|Location|Level|Distribution period attr1: Level attr2: Location place
Games|Event|Language|Location|Level|Distribution period attr1: Event attr2: Distribution period None
Games|Event|Language|Location|Level|Distribution period attr1: Language attr2: Event None
Games|Event|Language|Location|Level|Distribution period attr1: Level attr2: Distribution period None
Specification|Status|Comment attr1: Specification attr2: Comment document
Specification|Status|Comment attr1: Comment attr2: Specification document
Specification|Status|Comment attr1: Status attr2: Comment None
Specification|Status|Comment attr1: Status attr2: Specification None
Fee|Large entity fee|small entity fee|micro entity fee attr1: Large entity fee attr2: micro entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: Large entity fee attr2: small entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: small entity fee attr2: micro entity fee fee
Fee|Large entity fee|small entity fee|micro entity fee attr1: small entity fee attr2: micro entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: small entity fee attr2: Large entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: micro entity fee attr2: Large entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: micro entity fee attr2: small entity fee fee
Fee|Large entity fee|small entity fee|micro entity fee attr1: micro entity fee attr2: small entity fee entities
Fee|Large entity fee|small entity fee|micro entity fee attr1: Fee attr2: Large entity fee None
Guideline|Section|For Information About attr1: Guideline attr2: For Information About information
Guideline|Section|For Information About attr1: For Information About attr2: Guideline information
Guideline|Section|For Information About attr1: Section attr2: Guideline None
Guideline|Section|For Information About attr1: Section attr2: For Information About None
Title|Price|Purchase attr1: Price attr2: Purchase cost
Title|Price|Purchase attr1: Purchase attr2: Price cost
Title|Price|Purchase attr1: Title attr2: Title None