Time is money! In today’s fast-paced world, time is the most precious commodity of all. The time taken to get to work, the time taken to get your coffee from the machine, the time spent waiting at a pedestrian crossing: we are always working to reduce the time things take, so we can allocate the saved time to the next task at hand.
This phenomenon is most relevant in the customer care space, where every precious minute saved in answering a query or resolving a raised ticket is critical for the service provider. IT ticket resolution is one such time-bound service, whose success is measured by the swiftness with which raised tickets are solved. Essentially, the less time taken to read, analyze, and solve a raised ticket, the better the outlook for the enterprise administering the solution.
IT tickets, or support requests, range from simple to complex. They also carry a variety of emotions, from mild annoyance to severe discontent. A mechanism to address ‘priority’ requests is essential to ensure that customers who are very unhappy or distressed are attended to before others. Prioritizing and resolving these issues is directly tied to the retention of the customer base.
Sentiment analysis, by virtue of its approach, is key to detecting these ‘priority’ IT tickets and is being used to achieve the desired results. The process is carried out without any human intervention, so the quick detection of ‘unhappy’ or ‘priority’ requests results in a quick turnaround time for resolving the tickets.
IT ticket comments come with descriptions that are usually short and sometimes terse. The ‘objective descriptions’ in these comments can seem inherently negative but are neutral in the IT support context.
A typical example is “The program is throwing up an error”. This statement does not necessarily express any sentiment. The challenge lies in ignoring these objective parts of the comment and concentrating on the sentiment expressed in the ‘subjective part’ of the IT ticket; a typical example of that is “This is terrible and I am frustrated”.
Segregation based on the above theory entails a complete understanding of the product and service for which the IT tickets are being raised. This understanding enables us to identify the words, phrases, and sentences that are being used to describe the ‘undesirable behavior’ or ‘malfunctioning’ of the product or service.
This, to a layman, appears straightforward and simple, but in reality it poses a serious challenge: distinguishing between the objective and subjective parts of the issue or comment. In what follows, we explore the ‘Keyword-Based Approach’ to sentiment analysis for this task.
An NLTK-based library that assigns sentiment polarity scores ranging from -1 (most negative) to +1 (most positive) to pieces of text was used. We first filter comments to remove artifacts like personal details, URLs, email addresses, logs, and other metadata, since these do not have any sentiment value. The text of the filtered comment is then used in the scoring process.
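The filtering step can be sketched with a couple of regular expressions. This is a minimal illustration, not the actual pipeline: the real filters also cover personal details, logs, and other metadata, and the exact patterns used are assumptions.

```python
import re

def filter_comment(text):
    """Strip non-sentiment artifacts (URLs, email addresses) before
    scoring. A minimal sketch; the real pipeline also removes personal
    details, logs, and other metadata."""
    text = re.sub(r"https?://\S+", " ", text)   # URLs
    text = re.sub(r"\S+@\S+\.\S+", " ", text)   # email addresses
    return re.sub(r"\s+", " ", text).strip()    # collapse whitespace
```

Only the text that survives this cleanup is handed to the scorer.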
If a sentence has non-alphabetic content greater than 25%, it is not taken into consideration while scoring. Such sentences usually do not contribute to the sentiment polarity of the text, since their tokens are usually not dictionary words; for example, the code snippet `C = A + B`.
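The 25% check reduces to counting non-letter characters. A sketch (how whitespace is counted is an assumption):

```python
def mostly_non_alphabetic(sentence, threshold=0.25):
    """True when more than `threshold` of the non-space characters
    are not letters, as in code snippets like 'C = A + B'."""
    chars = [c for c in sentence if not c.isspace()]
    if not chars:
        return True
    return sum(1 for c in chars if not c.isalpha()) / len(chars) > threshold

mostly_non_alphabetic("C = A + B")         # code-like: excluded from scoring
mostly_non_alphabetic("This is terrible.")  # prose: kept
```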
We took a granular approach when assigning sentiment scores to text as we specifically want to ignore parts of the text which seem inherently negative but are neutral in the IT support context. The approach we use is to divide each sentence in a comment into ‘n-grams’.
An n-gram is a window of consecutive words. This window is slid over the words in the sentence to identify the constituent n-grams. We manually go through a large collection of comments and come up with n-grams that should not be assigned a sentiment value in the IT support context. Any such n-grams that show up in a sentence are ignored. The remaining n-grams are scored, and we take into consideration only n-grams whose sentiment score is significantly different from zero.
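The sliding window and the ignore-list lookup can be sketched as follows; the entries in the ignore list here are purely illustrative stand-ins for the manually curated list.

```python
def ngrams(words, n=3):
    """All windows of n consecutive words, slid across the sentence."""
    return [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]

# Hypothetical ignore list: n-grams that read negative but are
# neutral when describing IT issues.
IGNORE = {("throwing", "up", "an"), ("up", "an", "error")}

words = "the program is throwing up an error".split()
kept = [g for g in ngrams(words) if g not in IGNORE]
```

Only the surviving n-grams in `kept` move on to scoring.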
Also, if a sentence has fewer than three words, a sentiment value for that sentence is calculated directly, and if that score is significantly different from zero, it is used when calculating the sentiment score for the comment. We also manually maintain a list of such short sentences that should be ignored.
For adjacent n-grams with overlapping tokens, if their scores are nearly identical and the score of the shared part contributes the vast majority of the n-gram score, only the first n-gram’s score is taken into consideration. For example, consider the sentence “it is frustrating to have to go over this again.” Taking windows of three consecutive words (trigrams), “it is frustrating”, “is frustrating to”, and “frustrating to have” all have a sentiment score of -0.4, and the word ‘frustrating’ alone contributes that score. So we take into consideration only the score of the first trigram and ignore the other two.
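One way to implement this overlap rule is sketched below. The single-word lexicon, the scoring function, and the tolerance are assumptions standing in for the real scorer; the structure of the rule follows the description above.

```python
# Toy lexicon standing in for the real scorer (an assumption).
LEXICON = {"frustrating": -0.4}

def score(ngram):
    """Score an n-gram as the sum of its word scores."""
    return sum(LEXICON.get(w, 0.0) for w in ngram)

def dedupe(scored, tol=0.05):
    """Keep only the first of consecutive overlapping n-grams whose
    scores are nearly identical and driven by the words they share."""
    kept = []
    for gram, s in scored:
        if kept:
            prev_gram, prev_s = kept[-1]
            shared = set(prev_gram) & set(gram)
            shared_score = sum(LEXICON.get(w, 0.0) for w in shared)
            if abs(s - prev_s) <= tol and abs(shared_score) >= 0.9 * abs(s) > 0:
                continue  # the overlap already carries this score
        kept.append((gram, s))
    return kept

trigrams = [("it", "is", "frustrating"),
            ("is", "frustrating", "to"),
            ("frustrating", "to", "have")]
kept = dedupe([(g, score(g)) for g in trigrams])
```

On the example from the text, only the first trigram survives, since ‘frustrating’ alone drives all three scores.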
All n-grams that satisfy the conditions mentioned above are collected along with their scores. If there are no such n-grams, a sentiment score of zero is assigned to the comment. Otherwise, we check whether there are one or more n-grams with a score less than or equal to a threshold value. If so, we take the mean of the sentiment values of all n-grams whose score is less than or equal to the threshold, and assign this mean as the sentiment score of the comment. This was done as part of an effort to aggressively go after negative comments.
score = ⟨x_i⟩, taken over all n-grams with x_i ≤ threshold

Here, x_i represents a sentiment score and the angle brackets denote the mean value.
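This branch of the aggregation can be sketched as below; the threshold value is an assumption, since the text does not give it.

```python
def negative_mean(scores, threshold=-0.3):
    """Mean of the n-gram scores at or below the threshold, or None
    when no score crosses it (the weighted-average fallback then
    applies). The threshold value here is an assumption."""
    low = [s for s in scores if s <= threshold]
    return sum(low) / len(low) if low else None
```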
If no n-gram has a score less than or equal to the threshold, we take the weighted average of the sentiment values of all the scored n-grams and assign this as the sentiment value of the comment. The weights are chosen such that, for negative scores, the weight is greater than 1 and increases as the score decreases, while for positive scores the weight is less than 1 and decreases as the score increases.
score = Σ_i w_i·x_i / Σ_i w_i

Here, x_i denotes the sentiment score for a trigram and w_i denotes its weight.
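One weight function with the stated shape is an exponential of the negated score; both the exponential form and the constant k are assumptions, as the original weighting is not specified.

```python
import math

def weighted_average(scores, k=2.0):
    """Weighted mean of n-gram scores. The weight w_i = exp(-k * x_i)
    is greater than 1 for negative scores (growing as the score falls)
    and less than 1 for positive scores (shrinking as the score rises).
    The exponential form and the value of k are assumptions."""
    weights = [math.exp(-k * x) for x in scores]
    return sum(w * x for w, x in zip(weights, scores)) / sum(weights)
```

The effect is to pull the aggregate toward the negative side: for scores of -0.5 and +0.5, the plain mean is 0, but the weighted average comes out negative.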
We created a gold-standard set of comments consisting of a small, balanced set of negative and non-negative comments. Here, ‘precision’ is the fraction of comments that are actually negative among those classified as negative. The closer the precision is to 1, the fewer false negatives there are relative to true negatives. ‘Recall’ is the fraction of negative comments classified correctly out of the total number of negative comments. The closer the recall is to 1, the higher the fraction of negative comments classified as negative.
Here TN represents true negatives, i.e. comments which are negative and are classified as negative. FN represents false negatives, i.e. comments which are not negative but which are classified as negative. FP represents false positives, i.e. comments which are negative, but are classified as non-negative.
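With the article’s labels (where ‘negative’ is the target class), the two metrics reduce to the usual ratios; the counts below are purely illustrative.

```python
def precision_recall(tn, fn, fp):
    """Precision = TN / (TN + FN), Recall = TN / (TN + FP), using the
    article's labels for the 'negative' target class:
    tn = negative classified as negative,
    fn = non-negative classified as negative,
    fp = negative classified as non-negative."""
    return tn / (tn + fn), tn / (tn + fp)

p, r = precision_recall(tn=40, fn=10, fp=10)  # illustrative counts
```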
The results are displayed below.
The reason the precision is so low on the second set is that the vast majority of tickets are non-negative, and a certain percentage of these are wrongly marked as negative by our algorithm. This number is large compared to the number of negative comments that are marked as negative.
We also ran the raw comments through the library we used for sentiment analysis and got the following results.
So we can see that our algorithm, dividing comments into trigrams and assigning scores according to the above procedure, vastly improves sentiment polarity prediction over assigning scores to the raw comments all at once.
We came up with an effective method to ignore the objective parts of an IT support ticket by building a list of text snippets that typically carry no sentiment value in the IT support context. The current method of assigning sentiment scores is lexicon-based and relies on keywords to which a sentiment score is attached.
While this technique succeeds at quickly picking up highly negative statements and pushing them up the ladder for immediate action, it might not pick up subtle ways of expressing a negative sentiment that a human reader would easily catch. Nonetheless, it is effective in identifying negative IT tickets quickly.