Emotion in human language: training deep learning neural nets for emotional quantifier discovery in predictive linguistics applications.

by clif high, Saturday, January 18, 2014 5:37am

with respect...

Our predictive linguistics method is composed of multiple layers of processing. Unlike those recent predictive linguistics applications which read and interpret immediacy data streams such as Twitter, the method developed here at halfpasthuman employs algorithms which do not rely on word tense, counts, frequency, or relative placement. Unlike deterministic programs (focused on specific 'sites' and word patterns in a deterministic 'hunt') which attempt to be predictive, our processing actually enhances serendipitous discovery of future-forecasting linguistic patterns through the use of trained, multiply-layered neural net software technology.

The method i developed (from 1993 through 1997) uses emotional quantifiers, and the change of emotional 'tone' (a weighted sum of emotional quantifiers) over time within specific contexts (conversational domains), as the 'mechanism' for forecasting linguistic elements/structures to appear in the future. Further, by weighting additional, more subtle, time-based emotional quantifiers within our multi-dimensional array of such, we are able to provide (albeit crudely) a form of 'timing clues' to the approximate point (in the future) of the appearance of the forecast linguistic structure.
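As a rough illustration of the arithmetic only (a minimal C++ sketch, not the production code; the quantifier values, weights, and names here are invented for the example), 'tone' reduces to a weighted sum, and the forecasting signal is its change between data-gathering sweeps:

    #include <cstdio>
    #include <numeric>
    #include <vector>

    // One emotional quantifier as applied to a word within a context.
    struct Quantifier {
        double value;   // intensity of the emotional component
        double weight;  // contribution of this component to 'tone'
    };

    // 'Tone' is the weighted sum of the quantifiers active in a context.
    double tone(const std::vector<Quantifier>& qs) {
        return std::accumulate(qs.begin(), qs.end(), 0.0,
            [](double acc, const Quantifier& q) { return acc + q.value * q.weight; });
    }

    int main() {
        // Two sweeps over the same context; values are invented.
        std::vector<Quantifier> earlier = {{0.4, 1.0}, {-0.2, 0.5}};
        std::vector<Quantifier> later   = {{0.7, 1.0}, {-0.1, 0.5}};
        // The change of tone over time is what drives the forecast.
        std::printf("delta tone = %f\n", tone(later) - tone(earlier));
    }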

At its core, the predictive linguistics method that i invented is an 'expert system'. This aspect of my processing was developed using Prolog (by LPA, Logic Programming Associates, out of the UK) in a 'pure' form of a 'self-compiling interpreter'.

The top-level expert system is most frequently referred to, within our writings/reports, as 'modelspace'. This is a graphical display of the expert system's processing of the linguistic structures/contexts gathered in the most recent data-processing 'sweep'. It presents our forecast linguistics in a complex fashion that provides the emotional quantifier sums as graphics clues (color, hue, intensity, pixel clustering).
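As a toy illustration of the 'graphics clues' idea (assuming a simple linear mapping; the actual modelspace rendering is considerably more involved and is not specified here), a quantifier sum might drive hue and intensity like so:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Color { unsigned char r, g, b; };

    // Toy mapping from an emotional quantifier sum to a display color:
    // negative sums shade toward red, positive toward green, and the
    // magnitude of the sum drives the intensity.
    Color colorFor(double quantifierSum) {
        double s = std::clamp(quantifierSum, -1.0, 1.0);
        auto intensity = static_cast<unsigned char>(std::abs(s) * 255.0);
        if (s < 0.0) return {intensity, 0, 0};  // negative sums: red
        return {0, intensity, 0};               // positive sums: green
    }

    int main() {
        Color c = colorFor(-0.6);
        std::printf("rgb(%d, %d, %d)\n", c.r, c.g, c.b);
    }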

Modelspace provides a display of the completed processing of the collected data. Most of our efforts in processing are placed in the neural net layer that analyzes the data (word patterns/linguistic structures) and applies the emotional quantifiers to the words within their current (captured) context (the domain of the conversation from which the words were extracted).

The neural net i constructed and refined from 1995 to 2005 is built primarily in C (deep C functions executed via Prolog predicates) and C++. There is fundamentally nothing particularly noteworthy about my neural net except that i was using multiple, cross-monitored layers of nets in the mid-1990s in an effort to assign emotional quantifiers to the word/context (domains) as they were encountered in real time (by the processing net).

The neural net uses a lexicon (database) of reference values for the words to 'recognize' or 'learn' the current use of the word within the domain/context, and to apply the 'predictive elements' to the multi-dimensional array of [current values] that each word carries through our processing. The array is defined by eight columns and eight rows, with up to ten time-based layers of that sixty-four element grid possible.
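In C++ terms, the per-word record described above could be sketched roughly as follows (the field names are illustrative assumptions; only the 8 x 8 x 10 dimensions come from the description):

    #include <array>
    #include <string>

    constexpr int kRows = 8, kCols = 8, kMaxLayers = 10;

    // One sixty-four element grid of emotional quantifier values.
    using Grid = std::array<std::array<double, kCols>, kRows>;

    // The multi-dimensional array of [current values] each word carries:
    // an 8x8 grid with up to ten time-based layers.
    struct WordRecord {
        std::string word;                       // the word as captured
        std::string domain;                     // context of the conversation
        std::array<Grid, kMaxLayers> layers{};  // time-based layers of the grid
        int activeLayers = 0;                   // layers currently populated
    };

    int main() {
        WordRecord w{"boat", "maritime"};
        w.layers[0][3][5] = 0.42;  // one quantifier cell, first time layer
        w.activeLayers = 1;
    }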

As has been noted in our results at predicting future linguistic patterns (descriptions of events to appear) over these past 20 years, THE key component is the constant 'tuning' of the lexicon's base reference values. This is due to the continuous 'time shift' of human (natural) language, as well as the continuous creation of new language, and the continuous re-association of time elements to current usage of language. It is also important to note that, within the predictive part of my work here, the above is true without regard to which human alphabet, or actual language, is being spoken/typed.

Words change continuously in both their meaning and their usage within the domains employed in communications; thus the 'timing clues' embedded within that communication also change continuously, and hence the need for frequent 'tuning' of the lexicon.

While lexicon tuning is a prime candidate for deep learning neural net processing, the results encountered so far have been disappointing, and thus we are still employing the 'old-fashioned', non-automated form of 'human tuning' of the lexicon.

The neural net that i employ in the processing works at placing the words/linguistic structures discovered within a set of values relative to a grid overlaid on modelspace (a basic word cloud/cluster with an emotive twist). This grid has time-based components but is, in the main, focused on emotional quantifiers. These numeric values are spread across modelspace relative to the grid of values (see graphic below).

The 'building/release tension' scale is one of my own devising that relates to 'human body expression of emotions'. This scale represents the 'dynamic' state of the emotional quantifier, and its aspects of the array are necessarily also the most dynamic within my system.

The mid-point of my range is represented by a 'zero point' in which there is an absence of emotional tension in the body (beyond the emotional tension required to qualify as 'alive'). This zero point is also the mid-point in our range of emotional states (love to hate and all in between). The mid-point of the emotional state elements of the array is actually defined as 'neutral indifference' within the expert system. This is a subtle element that has also demonstrated itself to be very powerful in concept. The 'neutral indifference' mid-point is also the middle of a tighter range that runs from 'positive indifference' shading over to 'negative indifference'.

In the training of the neural net (a process repeated with each data-gathering 'run'), i had noticed that natural human language expresses itself in a distinctly asymmetric fashion (thus the title of our reports as Asymmetric Linguistic Trend Analysis, or ALTA): far fewer word clusters land within the 'sweet spot' of the upper right quadrant of the graphic, which represents the sets for [increasing tension release] combined with [increasing positive emotional state]. As i also discovered in the training of the neural net, as well as in the continual tuning of the lexicon/database, the 'natural tendency' is for three high-concentration areas to develop within the 'negative-to-negative' set formation areas of the upper left (in the graphic), lower left, and lower right quadrants. These three areas are all dominated by a [negative bias] toward the [emotional experience].
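To pin down the geometry, here is a bare-bones C++ sketch, under the assumption that building/release tension runs vertically and negative/positive emotional state runs horizontally, with the 'zero point' of 'neutral indifference' at the origin (the axis scaling is invented):

    #include <cstdio>

    // Position of a word cluster on the modelspace grid.
    struct GridPosition {
        double tension;  // -1 = building tension ... 0 = zero point ... +1 = releasing
        double emotion;  // -1 = hate ... 0 = neutral indifference ... +1 = love
    };

    enum class Quadrant { UpperRight, UpperLeft, LowerLeft, LowerRight };

    // Which quadrant of the graphic a cluster falls into; the upper right
    // ([tension release] + [positive emotional state]) is the 'sweet spot'.
    Quadrant quadrantOf(const GridPosition& p) {
        if (p.tension >= 0.0)
            return p.emotion >= 0.0 ? Quadrant::UpperRight : Quadrant::UpperLeft;
        return p.emotion >= 0.0 ? Quadrant::LowerRight : Quadrant::LowerLeft;
    }

    int main() {
        GridPosition cluster{0.6, 0.8};  // releasing tension, strongly positive
        std::printf("quadrant = %d\n", static_cast<int>(quadrantOf(cluster)));
    }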

Somewhat curiously, i have discovered over time that the most accurate predictions are made from robust sets (i employ 'fuzzy sets' theory in the interpretation of the data) that are within the [positive to positive] area of the far less populated upper right quadrant (in our graphic above). This is generally true of both the nature of the forecast/prediction of linguistics to appear, as well as of its 'timing clues'.
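For the curious, the fuzzy-set machinery reduces to graded membership and the standard (Zadeh) max/min operators; a minimal sketch follows (the membership grades and the 0.7 'robustness' threshold are invented placeholders, not values from my system):

    #include <algorithm>
    #include <cstdio>

    // Fuzzy sets: membership is a grade in [0, 1] rather than binary in/out.
    double fuzzyUnion(double a, double b) { return std::max(a, b); }  // OR
    double fuzzyAnd(double a, double b)   { return std::min(a, b); }  // AND

    // A set is 'robust' enough to interpret when its grade clears a threshold.
    bool isRobust(double grade, double threshold = 0.7) {
        return grade >= threshold;
    }

    int main() {
        double loveObject = 0.9, boat = 0.8;  // invented membership grades
        double conjunction = fuzzyAnd(loveObject, boat);
        std::printf("conjunction = %.2f, robust = %d\n",
                    conjunction, isRobust(conjunction));
    }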

As a specific example, the 'blonds on boats' forecast (made a number of months in advance) of the sinking of the Concordia cruise ship off the coast of Italy was a conjunction and a union of sets that involved the upper middle of the graphic above. The [blond] was a [love object] who had a very high level of immediacy and shorter-term values associated with her appearance. The [love object] was conjunctive to a large set dominated by [boat] that was described (adequately) as a [ship of state], both within the data sets and in the forecast that was taken from them. That the [boat] set was shifting towards the [building tensions] end of the scale as modelspace was being populated and progressed over time indicated the [negative] consequences that would arise from the [positive] 'love object/interest' set that first identified this as an emotionally (and therefore predictively) hot area of modelspace. Our geographic indicators were also accurate for this prediction, due to the concentration of geographic descriptors that were later used in describing just where, and how, the cruise ship met its fate.

In training my neural net, i have the advantage of the lexicon i have been developing and maintaining since 1993, when the 'language model' first occurred to me. This lexicon could likely be removed from the system now, without penalty, as the art of neural net training has advanced significantly since my first implementation of dual-input net nodes.

By employing a 'master net' able to set learning parameters for the emotional quantifiers by way of reference to previous lexicon 'tunings', it should be possible to 'automate' the process of emotional quantifier discovery such that the necessity of human 'tweaking' is reduced. How much it may be reduced is an unknown at this time. However, having observed the changes in language at a microscopic and macroscopic level over these last two decades, i can confidently state that the 'rules' by which language changes could be codified such that the neural net could become a natural language learning machine, albeit one tied to a base interpretation of those emotions and their reduction to numeric values (a dodgy proposition at best, and one filled with opportunities for errors and f'uk ups). It is clear that no machine can learn emotion without human participation, as emotion is purely human. It is also clear that the attempt to quantify emotions via reduction to numeric values for 'mathematics' is an effort doomed to some degree of failure, as the subtle nature of individual human expression of emotion would necessarily defy reduction to machine code. Basically, until a machine can have 'skin in the game', predictive linguistics will always be a 'once-removed' effort, doomed to never fully succeed by that degree of separation.
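As a purely speculative sketch of such automated tuning (the update rule, the learning rate, and the names below are all assumptions for illustration, not the production system), the master net's output could be reduced to a per-sweep learning rate that nudges the lexicon's base reference values toward what the latest sweep observed:

    #include <string>
    #include <unordered_map>

    // Speculative sketch: nudge each word's base reference value toward the
    // value observed in the latest sweep. The learning rate stands in for
    // whatever the 'master net' would supply after consulting prior tunings.
    void tuneLexicon(std::unordered_map<std::string, double>& lexicon,
                     const std::unordered_map<std::string, double>& observed,
                     double learningRate = 0.1) {
        for (const auto& [word, value] : observed) {
            double& base = lexicon[word];           // new words start at 0.0
            base += learningRate * (value - base);  // exponential moving average
        }
    }

    int main() {
        std::unordered_map<std::string, double> lexicon{{"boat", 0.2}};
        tuneLexicon(lexicon, {{"boat", 0.5}, {"blond", 0.8}});
    }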

Without regard to the potential for a 'self training' neural net application within the predictive linguistics area, i have to note that the most accurate forecasts (compiled from 20+ years of this work) are directly related to the accuracy of the emotional quantifiers AND their status within the context/domain from which they are extracted. In other words, it is still a case of the psychic nature of humanity 'leaking' out from their writings that allows 'predictive linguistics' to exist as a 'fuzzy discipline', and of our ability to capture that psychic leak by dynamically annotating these subtle changes in language.

And, since emotions are involved in future forecast discovery....well, y'all know what them humans are like...especially when they get emotional.