Knowledge Graph Embedding by Translating on Hyperplanes (Reading Notes)

The authors propose TransH, which models a relation as a hyperplane together with a translation operation on it. It handles relations with complex mapping properties (one-to-many, many-to-one, many-to-many) and makes a good trade-off between model capacity and efficiency.

# Research Objective

• handle relations with complex mapping properties (reflexive, one-to-many, many-to-one, many-to-many)
• make a good trade-off between model capacity and efficiency

# Problem Statement

• TransE cannot properly model reflexive, one-to-many, many-to-many, and many-to-one relations
• some more expressive models can handle TransE's problem, but sacrifice efficiency in the process

# Contribution

• proposing a method named translation on hyperplanes (TransH)
• interpreting a relation as a translation operation on a hyperplane
• proposing a simple trick to reduce the chance of false negative labeling

# Embedding by Translating on Hyperplanes

## Relations’ Mapping Properties in Embedding

TransE

• an entity has the same representation in every relation it participates in, ignoring that entities should have distributed, relation-specific representations when involved in different relations

## Translating on Hyperplanes (TransH)

#### Objective Function

Scoring function:

For a relation $r$, TransH introduces a hyperplane with normal vector $w_r$ (restricted to $\left\|w_{r}\right\|_{2}=1$) and a translation vector $d_r$ lying on that hyperplane. The projections of $h$ and $t$ onto the hyperplane are:

$$h_{\perp}=h-w_{r}^{\top} h\, w_{r}, \qquad t_{\perp}=t-w_{r}^{\top} t\, w_{r}$$

The score function is:

$$f_{r}(h, t)=\left\|h_{\perp}+d_{r}-t_{\perp}\right\|_{2}^{2}$$
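The projection and score function can be sketched in NumPy (a minimal illustration, not the authors' code; variable names and dimensions are my assumptions):

```python
import numpy as np

def transh_score(h, t, w_r, d_r):
    """TransH score f_r(h, t): project h and t onto the relation's
    hyperplane (normal vector w_r), translate by d_r, and measure the
    squared L2 distance. Lower scores mean a more plausible triplet."""
    w_r = w_r / np.linalg.norm(w_r)        # enforce ||w_r||_2 = 1
    h_perp = h - np.dot(w_r, h) * w_r      # projection of h onto the hyperplane
    t_perp = t - np.dot(w_r, t) * w_r      # projection of t onto the hyperplane
    return np.sum((h_perp + d_r - t_perp) ** 2)
```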

#### Training

The loss function consists of a margin-based ranking loss over golden triplets $\Delta$ and corrupted triplets $\Delta^{\prime}$:

$$\mathcal{L}=\sum_{(h, r, t) \in \Delta} \sum_{\left(h^{\prime}, r, t^{\prime}\right) \in \Delta^{\prime}}\left[f_{r}(h, t)+\gamma-f_{r}\left(h^{\prime}, t^{\prime}\right)\right]_{+}$$

subject to the constraints:

$$\forall e \in E,\ \|e\|_{2} \leq 1; \qquad \forall r \in R,\ \frac{\left|w_{r}^{\top} d_{r}\right|}{\left\|d_{r}\right\|_{2}} \leq \epsilon; \qquad \forall r \in R,\ \left\|w_{r}\right\|_{2}=1$$

• the second constraint guarantees that the translation vector $d_r$ lies in the hyperplane
• instead of enforcing the last constraint directly, they project each $w_r$ to the unit $l_2$-ball before visiting each mini-batch
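The ranking loss and the projection step can be sketched as follows (a simplified illustration under my own naming; the paper's soft orthogonality penalty is omitted for brevity):

```python
import numpy as np

def margin_ranking_loss(pos_scores, neg_scores, gamma=1.0):
    """Margin-based ranking loss: sum of [f(pos) + gamma - f(neg)]_+
    over paired golden/corrupted triplets."""
    return np.sum(np.maximum(0.0, pos_scores + gamma - neg_scores))

def project_to_unit_ball(vectors):
    """Projection step used instead of a hard constraint: rescale any
    row whose L2 norm exceeds 1 back onto the unit ball."""
    norms = np.linalg.norm(vectors, axis=-1, keepdims=True)
    return vectors / np.maximum(norms, 1.0)
```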

• Should projections of an entity that carry the same meaning under different relations be identical or similar?
• Could this mechanism solve the polysemy problem of a single entity?

## Reducing False Negative Labels

The authors set different probabilities for replacing the head or tail entity depending on the mapping property of the relation (one-to-many, many-to-one, many-to-many):

• give more chance to replacing the head entity if the relation is one-to-many

• For each relation, count the number of tail entities per head entity (and vice versa), and decide which side to corrupt with probabilities proportional to these statistics.
• In this way, for a one-to-many relation, replacing the head entity is clearly less likely to produce a positive triplet (only one head is correct), whereas replacing the tail is riskier: since the head corresponds to many tails under the relation, a replacement tail not among those observed may still satisfy it.
• By comparison, I think the truncated uniform negative sampling used in "Bootstrapping-Entity-Alignment-with-Knowledge-Graph-Embedding" would work better.
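The statistics-based replacement probability described above can be sketched as follows (a simplified illustration of the paper's trick; the triple format and function names are my own):

```python
import random
from collections import defaultdict

def replace_head_prob(triples):
    """For each relation, compute tph (average tails per head) and hpt
    (average heads per tail); corrupt the head with probability
    tph / (tph + hpt), so one-to-many relations mostly corrupt heads."""
    tails_per_head = defaultdict(set)
    heads_per_tail = defaultdict(set)
    for h, r, t in triples:
        tails_per_head[(r, h)].add(t)
        heads_per_tail[(r, t)].add(h)
    probs = {}
    for r in {r for _, r, _ in triples}:
        tph_counts = [len(ts) for (rr, _), ts in tails_per_head.items() if rr == r]
        hpt_counts = [len(hs) for (rr, _), hs in heads_per_tail.items() if rr == r]
        tph = sum(tph_counts) / len(tph_counts)
        hpt = sum(hpt_counts) / len(hpt_counts)
        probs[r] = tph / (tph + hpt)
    return probs

def corrupt(triple, entities, probs, rng=random):
    """Corrupt one side of (h, r, t) according to the relation's probability."""
    h, r, t = triple
    if rng.random() < probs[r]:
        return (rng.choice(entities), r, t)   # replace head
    return (h, r, rng.choice(entities))       # replace tail
```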

# Experiments

Details can be found in the paper.

### Outperforming TransE on One-to-One Relations

The authors explain:

• entities are connected through relations, so better embeddings of some parts lead to better results on the whole.

## Triplets Classification

This means FB13 is a very dense subgraph where strong correlations exist between entities.

## Relational Fact Extraction from Text

• Actually, knowledge graph embedding is able to score a candidate fact without observing any evidence from an external text corpus.
