You are falling into the big trap that I constantly see people stumble into: Thinking you can blend into the masses.
I work in AI research at my university and I'm actively interested in the whole topic, coding up my own experiments in my free time and so on. Let me assure you: there is no way for you to blend in just because a lot of other people search for the same things.
How to find a needle in 50 million pieces of hay was figured out quite well even before large-scale computerization. It was introduced here in the 70s to fight the RAF; it was called Rasterfahndung, and it worked.
The pitfall here is the following principle: humans categorize into guilty and not guilty when they think about their behaviour in relation to their authority (a neutral term for any possible state; try not to read what I say as agitation but rather as a scientific excursus). But when you train a neural network in AI research, you quickly learn that by grouping into A or B (e.g. guilty/not guilty) you aren't extracting even a fraction of the information that's in your data. Instead of the two categories human brains favour, we can have arbitrarily many labels. For example: has searched for 'Waco', has also searched for 'Texas', has also searched for 'firearm', has also searched for 'control', has also searched for 'restriction', ..., has also searched for 'diaper'.
There will be someone who has searched for exactly the same 50 terms as you, except 'diaper'. You will not be labelled in the same category. The reason is that nobody has to invent categories that mean anything to a human mind; instead, the neural nets create their own categories on the fly as soon as they notice significant clustering happening.
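To make that concrete, here's a minimal sketch (the term sets and the Jaccard measure are purely my own illustration, not anyone's actual system): once each search term is its own label dimension, two people who differ by a single term are distinct points, even though they look "basically the same" to a human.

```python
# Hypothetical illustration: one label per search term instead of a single
# guilty/not-guilty bit. Profiles differing by one term are distinct points.
user_a = {"Waco", "Texas", "firearm", "control", "restriction"}
user_b = user_a | {"diaper"}  # identical except for one extra term

def jaccard(x, y):
    """Set similarity in [0, 1]; 1.0 means identical profiles."""
    return len(x & y) / len(x | y)

similarity = jaccard(user_a, user_b)  # high, but not 1.0 — distinct profiles
```

A clustering algorithm fed these fine-grained profiles can separate them however the data demands, with no human ever naming the resulting groups.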
Look into algorithms like k-means and DBSCAN for the simple, algorithmic, non-neural version. These algorithms get combined with classic ML algorithms like random forests, which in turn get combined with neural networks.
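If you've never seen it, k-means is genuinely simple. Here's a bare-bones version on toy 2-D points (the data and the implementation are my own sketch for illustration, not production code):

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal k-means on 2-D points: assign each point to the nearest
    center, then move each center to the mean of its cluster. Repeat."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: (p[0] - centers[i][0]) ** 2 + (p[1] - centers[i][1]) ** 2,
            )
            clusters[nearest].append(p)
        centers = [
            (sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c))
            if c else centers[i]  # keep an empty cluster's old center
            for i, c in enumerate(clusters)
        ]
    return centers, clusters

# Two well-separated toy groups; k-means recovers them without being told
# what either group "means".
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, clusters = kmeans(points, k=2)
```

Notice that nothing in the algorithm needs a human-meaningful name for either cluster; the groups fall out of the geometry of the data alone.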
Here is one major crux: humans may be sitting there, but they don't know what the neural net has actually learned. There is no way for neural networks to explain why they have learned something a certain way; they cannot explain themselves. People hear the term 'data analyst' and think that in the end some person is evaluating whether you are a terrorist or a housewife, but that's more like wishful thinking. People working with these systems tend to eventually just trust that the neural net is right and agree with it more and more. Guilty as charged: it costs us so much time in our research to find out whether the thing that is obviously predicted correctly was learned and not memorized, or whether the net has even learned what we think it has learned, or whether there is something else in the data that coincidentally produces the same prediction on this particular set of data but would predict the opposite on some theoretically possible other set of data.
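That last failure mode is easy to demonstrate on a toy scale. In this made-up example (the data and the one-rule learner are my own sketch), the training sample contains a spurious feature that coincidentally tracks the label perfectly, so the learner latches onto it and then fails completely on data where the coincidence breaks:

```python
# Each row: ((noisy_real_signal, spurious_artifact), label).
# In this training sample the artifact happens to match the label perfectly.
train = [((1, 1), 1), ((0, 1), 1), ((0, 0), 0), ((1, 0), 0)]
# On other theoretically possible data the coincidence flips.
test = [((1, 0), 1), ((0, 1), 0)]

def fit_best_feature(data):
    """1-rule learner: pick the single feature with the best training accuracy."""
    n_feats = len(data[0][0])
    def accuracy(i):
        return sum(x[i] == y for x, y in data) / len(data)
    return max(range(n_feats), key=accuracy)

def predict(feature, x):
    return x[feature]

feature = fit_best_feature(train)  # picks the spurious artifact
train_acc = sum(predict(feature, x) == y for x, y in train) / len(train)
test_acc = sum(predict(feature, x) == y for x, y in test) / len(test)
```

Training accuracy is perfect, test accuracy is zero, and nothing in the fitted model itself tells you which of the two it was going to be.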
tl;dr: You can never hope to "blend in" by behaving like others, because of the advances in graph theory, linear algebra and "probably approximately correct" theory, and because your intuition about what looks inconspicuous has no significance.