Semantic Oppositeness Embedding Using an Autoencoder-based Learning Model 
Database and Expert Systems Applications
Semantic oppositeness is the natural counterpart of the popular natural language processing concept of semantic similarity. Much as semantic similarity measures the degree to which two concepts are similar, semantic oppositeness yields the degree to which two concepts oppose each other. This complementary nature has led most applications and studies to incorrectly assume semantic oppositeness to be the inverse of semantic similarity. In other trivializations, semantic oppositeness is used interchangeably with antonymy, which is as inaccurate as replacing semantic similarity with simple synonymy. These erroneous assumptions and oversimplifications persist mainly due either to a lack of information or to the computational complexity of calculating semantic oppositeness. The objective of this research is to prove that it is possible to extend the idea of word vector embedding to incorporate semantic oppositeness, so that an effective mapping of semantic oppositeness can be obtained in a given vector space. The experiments presented in this paper show that the proposed method achieves a training accuracy of 97.91\% and a test accuracy of 97.82\%, demonstrating the applicability of this method even in potentially highly sensitive applications. Beyond this main research contribution, this work also introduces a novel unanchored vector embedding method and a novel inductive transfer learning process.
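To make the distinction concrete, the following is a minimal sketch (not the paper's method) of why treating oppositeness as the inverse of similarity fails; the three-dimensional toy vectors are hypothetical stand-ins for real word embeddings.

# Illustrative sketch: "1 - similarity" conflates unrelatedness with
# oppositeness. The toy 3-d vectors below are hypothetical, chosen only
# to mimic the geometry of real word embeddings.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

hot     = np.array([0.9, 0.1, 0.0])    # hypothetical embedding of "hot"
cold    = np.array([0.8, -0.2, 0.1])   # antonym of "hot": still nearby in space
algebra = np.array([0.0, 0.1, 0.95])   # unrelated to "hot"

for word, vec in [("cold", cold), ("algebra", algebra)]:
    sim = cosine_similarity(hot, vec)
    # Naive "oppositeness" computed as the inverse of similarity:
    print(f"hot vs {word}: similarity={sim:.2f}, 1-similarity={1 - sim:.2f}")

# Output: 1-similarity ranks "algebra" (~0.99) as far more "opposite" to
# "hot" than "cold" (~0.07) is, even though hot/cold is the true opposite
# pair; hence oppositeness needs its own learned embedding.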
Keywords: Natural Language Processing | Machine Learning / Deep Learning | Big Data | Semantic Oppositeness | Autoencoder | Transfer Learning | Unanchored Learning