What does 'Linear regularities among words' mean?
Context: In the paper "Efficient Estimation of Word Representations in Vector Space", T. Mikolov et al. use the phrase "linear regularities among words".
What does that phrase mean in the context of the paper, or more generally in NLP?
Quoting the paragraph from the paper:
Somewhat surprisingly, it was found that similarity of word representations goes beyond simple syntactic regularities. Using a word offset technique where simple algebraic operations are performed on the word vectors, it was shown for example that vector("King") - vector("Man") + vector("Woman") results in a vector that is closest to the vector representation of the word Queen [20].

In this paper, we try to maximize accuracy of these vector operations by developing new model architectures that preserve the linear regularities among words. We design a new comprehensive test set for measuring both syntactic and semantic regularities, and show that many such regularities can be learned with high accuracy. Moreover, we discuss how training time and accuracy depends on the dimensionality of the word vectors and on the amount of the training data.
Tags: nlp, language-model, representation
asked by Dawny33
1 Answer
By "linear regularities among words", the authors mean that the learned word vectors obey approximately linear, additive relationships, so that analogies between words can be answered with simple vector arithmetic:

V("King") - V("Man") + V("Woman") ≈ V("Queen")
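A minimal NumPy sketch of this offset technique, using made-up 4-dimensional toy vectors (real word2vec embeddings are learned from large corpora and typically have hundreds of dimensions; the values below are illustrative only):

```python
import numpy as np

# Hypothetical toy embeddings, chosen so that "king" - "man" + "woman"
# lands near "queen". Real vectors come from training on text.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.9, 0.1, 0.8, 0.0]),
    "man":   np.array([0.1, 0.9, 0.1, 0.0]),
    "woman": np.array([0.1, 0.1, 0.9, 0.0]),
}

def nearest(target, vectors, exclude=()):
    """Return the word whose vector has the highest cosine similarity
    to `target`, skipping any words in `exclude`."""
    best_word, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in exclude:
            continue
        sim = vec @ target / (np.linalg.norm(vec) * np.linalg.norm(target))
        if sim > best_sim:
            best_word, best_sim = word, sim
    return best_word

# The word-offset analogy: king - man + woman ≈ queen.
# Query words are excluded from the search, as is standard practice.
offset = vectors["king"] - vectors["man"] + vectors["woman"]
print(nearest(offset, vectors, exclude={"king", "man", "woman"}))  # queen
```

In practice you would use pre-trained embeddings (e.g. via a library such as gensim) rather than hand-built vectors, but the arithmetic is exactly this.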
answered by Preet