
    Tuesday, April 11, 2017

    Patent 1 of 2: How Google learns to influence and control users

    Two patents were recently granted to Google that paint a very interesting picture of the future of search. The first, “Detecting and correcting potential errors in user behavior,” is the subject of this article; the second, which deals with guided purchasing, will be covered in part two.

    Before we dive in, I feel it’s necessary to point out that having a patent granted does not mean that Google will be implementing everything (or even anything) contained therein; however, it does illustrate areas they’re considering and willing to invest significant time and money into protecting.

    That said, these two patents contain ideas and technologies that strongly reflect the direction Google is currently heading in, and they point to a stronger monetization strategy on mobile and voice-first devices.

    In this two-part series, I will take you through some of the key points of each patent. Then, I’ll talk about how the information from these two patents combines to paint a picture of a future where search and task completion are very different than they are today.

    In this article, I’m going to focus on the first patent, “Detecting and correcting potential errors in user behavior.” To get a feel for the full scope of what Google is getting at here, I’ve highlighted excerpts from key sections of the patent, followed by my assessment of those sections.

    Abstract

    A computing system is described that predicts a future action to be taken by a user of a computing device and determines, based on contextual information associated with the computing device, a current action being taken by the user. The computing system determines, based on the current action, a degree of likelihood of whether the user will be able to take the future action and predicts, based on the degree of likelihood, that the user will not be able to take the future action. The computing system sends information to the computing device indicating that the current action being taken by the user will lead to the user not being able to take the future action.

    In the abstract, we understand that Google intends to take into account what we are presently doing and place that in the context of tasks we are likely to do in the future. If a current action is likely to interfere with an expected future task, then the user is notified via their device. The abstract doesn’t include what will happen when that event occurs, so we’ll just have to keep reading.
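    To make the flow described in the abstract easier to follow, here is a minimal sketch, in Python, of the predict / assess / notify loop it outlines. This is purely my own illustration: the names (Context, likelihood_of_success) and the toy 50 percent threshold are assumptions, not anything the patent specifies.

```python
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Context:
    """Hypothetical bundle of device signals (location, motion, time of day)."""
    location: str
    moving: bool
    hour: int

def predict_future_action(calendar: list) -> Optional[dict]:
    """Treat the next scheduled item as the 'future action' to protect."""
    return calendar[0] if calendar else None

def infer_current_action(ctx: Context) -> str:
    """Very rough stand-in for the patent's context-based action detection."""
    return "at_home" if ctx.location == "home" and not ctx.moving else "in_transit"

def likelihood_of_success(current_action: str, future: dict, ctx: Context) -> float:
    """Toy scoring: still at home with less time left than the trip takes -> low."""
    hours_left = future["depart_hour"] - ctx.hour
    if current_action == "at_home" and hours_left < future["travel_hours"]:
        return 0.1
    return 0.9

def maybe_notify(likelihood: float, future: dict) -> Optional[str]:
    """Only surface a warning when the predicted likelihood drops below 50%."""
    if likelihood < 0.5:
        return f"Heads up: what you're doing now may make you miss '{future['name']}'."
    return None

calendar = [{"name": "Flight to SFO", "depart_hour": 9, "travel_hours": 2}]
ctx = Context(location="home", moving=False, hour=8)
future = predict_future_action(calendar)
current = infer_current_action(ctx)
print(maybe_notify(likelihood_of_success(current, future, ctx), future))
```

    In this toy run, the system would warn the user, since they are still at home an hour before a departure that needs two hours of travel.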

    Background

    Some computing devices (e.g., a wearable device or a mobile phone) may function as personal assistants that are configured to execute search queries for, or notify users about, upcoming flights, nearby attractions, and other information that may be of interest to a user. For example, a computing device may have access to a digital calendar of a user and the computing device may alert the user when to begin traveling from a current location to arrive on-time to a future meeting or event. Or in another example, a computing device may have access to a shopping history of a user and the computing device may suggest certain products or services that would work with a product purchased by the user in the past, as opposed to other products and services that would be incompatible with a past purchase. Nevertheless, even with all the helpful reminders and access to information that some computing devices provide, the reminders and access to information may not always prevent individual users from making decisions and taking certain actions that lead to mistakes being made in their everyday lives.

    The background of a patent basically outlines its “why,” or what problem it is attempting to solve. In the background here, we see Google acknowledge that, while today’s devices can alert users to upcoming events they may need to attend or recommend products based on past purchase history (I wonder why that’s specifically mentioned, don’t you?), the current technologies aren’t proactive in correcting users when their actions are not compatible with these events or needs.

    Summary

    Section 2

    … the disclosure is directed to a method that includes predicting, by a computing system, a future action to be taken by a user of a computing device, determining, by the computing system, based on contextual information associated with the computing device, a current action being taken by the user, determining, by the computing system, based on the current action, a degree of likelihood of whether the user will be able to take the future action, and predicting, by the computing system, based on the degree of likelihood, that the user will not be able to take the future action. The method further includes sending, from the computing system, to the computing device, information indicating that the current action being taken by the user will lead to the user not being able to take the future action.

    While there are four parts to the summary, it is in Section 2 that we get the clearest picture of what Google is accomplishing with this patent. Building on the background, which establishes that the system will detect when a user’s current actions are likely to prevent future actions they are expected to take, here we see that Google will send the user an indication that their current actions will prevent future ones.

    This might not seem altogether interesting; however, as we dive further in, we’ll start to understand the full reach, influence and power that they are talking about.

    Detailed description

    You’re going to see some numeric references below, such as “device 110.” These numbers relate to the figures included with the patent. While most aren’t specifically relevant to understanding what is being discussed in our context, I don’t want you feeling like you’re missing anything, so I’ll include the figures above where they’re first referenced. Let’s begin with Figure 1.

    Figure 1 Of Patent: Detecting and correcting potential errors in user behavior

    Now let’s get back to what the patent covers …

    Section 25

    For example, notification data may include, but is not limited to, information specifying an event such as: the receipt of a communication message (e.g., e-mail, instant message, SMS, etc.) by a messaging account associated with a computing device, the receipt of information by a social networking account associated with computing device 110, a reminder of a calendar event (meetings, appointments, etc.) associated with a calendar account of computer device 110, information generated and/or received by a third-party application executing at computing device 110, the transmittal and/or receipt of inter-component communications between two or more components of platforms, applications, and/or services executing at computing device 110, etc.

    The sections between 2 and 25, while interesting, mostly reinforce the points noted previously. In Section 25, however, we get a bit of interesting information. Google mentions the use of communication systems built into the device, as well as social network account data and other third-party applications and services on the device accessed by the user. Basically, the patent is built on the idea that data from virtually any source can be used to determine the expected actions a user is likely to take.

    Section 31

    Examples of contextual information include past, current, and future physical locations, degrees of movement, magnitudes of change associated with movement, weather conditions, traffic conditions, patterns of travel, patterns of movement, application usage, calendar information, purchase histories, Internet browsing histories, and the like … In some examples, contextual information may include communication information such as information derived from e-mail messages, text messages, voice mail messages or voice conversations, calendar entries, task lists, social media network related information, and any other information about a user or computing device that can support a determination of a user context.

    In discussing how the system understands the context of actions, Google adds in Section 31 the idea of taking into account the motion of the user’s device and external factors such as weather conditions and purchase histories (there it is again). Once more, we see a wide variety of contextual data sources used, but here we also see the inclusion of voice conversations and voice mail.

    Privacy concerns aside, that’s a lot of data, especially when we consider that “voice conversations” doesn’t necessarily mean “on the phone,” given that the patent lists it separately from voice mail messages.
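    To keep the sheer breadth of Section 31 in view, here is one hypothetical way those signal types might be grouped into a single record. The structure and field names are my own; the patent does not specify any particular format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ContextSnapshot:
    """Illustrative (hypothetical) grouping of the signal types Section 31 enumerates."""
    timestamp: str
    location: Optional[str] = None            # past/current/future physical locations
    speed_kmh: float = 0.0                    # degree/magnitude of movement
    weather: Optional[str] = None             # weather conditions
    traffic: Optional[str] = None             # traffic conditions
    app_in_foreground: Optional[str] = None   # application usage
    calendar_events: List[str] = field(default_factory=list)
    purchase_history: List[str] = field(default_factory=list)
    browsing_history: List[str] = field(default_factory=list)
    messages: List[str] = field(default_factory=list)  # text derived from email/SMS/voice

snapshot = ContextSnapshot(
    timestamp="2017-04-11T08:05",
    location="home",
    weather="rain",
    calendar_events=["09:00 flight to SFO"],
    purchase_history=["umbrella"],
)
print(snapshot)
```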

    Section 34

    Context module 162 may maintain past and future contextual histories associated with the user of computing device 110. Context module 162 may catalog and record previous contexts of computing device 110 at various locations and times in the past and from the previously recorded contexts, may project or infer future contexts of computing device 110 at various future locations and future times. Context module 162 may associate future days and future times with the recurring contexts of prior days and times to build a future contextual history associated with the user of computing device 110.

    In Section 34, we see an expansion of the ideas presented previously into the unscheduled world. Up to this point in the patent, we’ve generally seen Google determining future events using data gathered from various sources (social media, text, voice, etc.). Here, however, the system is expanded to build an understanding of patterns from past and present behavior, and to establish from those patterns what the user is expected to do.

    Section 36

    Context module 162 may supplement a future contextual history associated with the user of computing device 110 with information stored on an electronic calendar or information mined from other communications information associated with computing device 110. For example, the electronic calendar may include a location associated with an event or appointment occurring at a future time or day when the user is typically at a home location. Rather than include the home location during the future time or day of the event as the expected location in the future contextual history, context module 162 may include event location as the expected location during the future time or day of the event.

    In Section 36, we see the bridge: the behavioral patterns established by “watching” us in Section 34 are combined with event and calendar information mined from other sources, so that known upcoming events which may disrupt our daily routine can override the habitual patterns.
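    Here is a minimal sketch of how that override might work, assuming a simple frequency-based notion of “habit” and a plain lookup table for calendar events; both are my own simplifications rather than the patent’s mechanism.

```python
from collections import Counter
from typing import Dict, List, Tuple

# Past observations of where the device was at a given weekday and hour (Section 34).
history: List[Tuple[str, int, str]] = [
    ("tue", 20, "home"), ("tue", 20, "home"), ("tue", 20, "gym"),
]

# Hypothetical calendar entries keyed by (weekday, hour) -> event location (Section 36).
calendar: Dict[Tuple[str, int], str] = {("tue", 20): "concert hall"}

def habitual_location(weekday: str, hour: int) -> str:
    """Project the most frequently observed location for this time slot."""
    seen = Counter(loc for d, h, loc in history if d == weekday and h == hour)
    return seen.most_common(1)[0][0] if seen else "unknown"

def expected_location(weekday: str, hour: int) -> str:
    """Let a known calendar event override the habitual pattern."""
    return calendar.get((weekday, hour), habitual_location(weekday, hour))

print(habitual_location("tue", 20))  # "home" -- the habit
print(expected_location("tue", 20))  # "concert hall" -- the calendar wins
```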

    Section 39

    As part of a prediction service accessed by computing device 110, prediction module 164 may automatically output notification data or other information to notification module 122 of computing device 110 for alerting a user of a correction that the user could take to a current action as a way to ensure that a future action is taken. Said differently, prediction module 164 may determine whether a current action being performed by a user in a current context is more likely or less likely to lead to the user being able to perform a future action in a future context.

    In Section 39, we simply see the output: Google notifying the user that an action they are taking now may impact the ability to perform a future action.

    Section 40

    In summary, prediction module 164 may perform one or more machine learning techniques to learn and model actions that users of computing device 110 and other computing devices typically take for different contexts. Through learning and modeling actions for different contexts, prediction module 164 may generate one or more rules for predicting actions that the user of computing device 110 takes for a particular context.

    In Section 40, the patent discusses the use of machine learning and modeling to determine and predict the actions being performed. In other sections of the patent, we read examples such as using the motion and location of a device to determine that the user is standing in line at an airport (and what the context of that action would predict).
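    The patent doesn’t disclose which machine learning techniques would actually be used, so as a stand-in, here is a tiny frequency-count “rule learner” in the spirit of Section 40: observe which action usually follows a given context, across users, and turn that into a rule with a confidence score. The contexts, actions, and scoring are all invented for illustration.

```python
from collections import Counter, defaultdict
from typing import Dict, List, Tuple

# Toy training data: (context label, action actually taken), gathered across users/days.
observations: List[Tuple[str, str]] = [
    ("airport_security_line", "boarding_flight"),
    ("airport_security_line", "boarding_flight"),
    ("airport_security_line", "meeting_arrival"),
    ("monday_8am_home", "commute_to_work"),
    ("monday_8am_home", "commute_to_work"),
]

def learn_rules(obs: List[Tuple[str, str]]) -> Dict[str, Tuple[str, float]]:
    """Derive "for context X, the user usually does Y" rules with a confidence score."""
    by_context: Dict[str, Counter] = defaultdict(Counter)
    for context, action in obs:
        by_context[context][action] += 1
    rules = {}
    for context, counts in by_context.items():
        action, n = counts.most_common(1)[0]
        rules[context] = (action, n / sum(counts.values()))
    return rules

rules = learn_rules(observations)
print(rules["airport_security_line"])  # ('boarding_flight', 0.666...)
```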

    In Section 44, we see a very helpful example: the system detecting that the user has overslept when they have a flight and thus alerting them to this fact before it’s too late to catch it, given current conditions (time to airport, time to get through the gates, etc.).
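    As a rough illustration of that Section 44 example, here is the kind of time-budget arithmetic such a check implies. The durations and times below are invented for the example, not values from the patent.

```python
from datetime import datetime, timedelta

def will_miss_flight(now: datetime, departure: datetime,
                     drive_to_airport: timedelta,
                     security_and_gate: timedelta) -> bool:
    """True if leaving right now no longer gets the user to the gate in time."""
    arrival_at_gate = now + drive_to_airport + security_and_gate
    return arrival_at_gate > departure

now = datetime(2017, 4, 11, 7, 10)        # the user is still at home
departure = datetime(2017, 4, 11, 8, 30)  # the flight leaves at 8:30
if will_miss_flight(now, departure,
                    drive_to_airport=timedelta(minutes=45),
                    security_and_gate=timedelta(minutes=50)):
    print("Wake up: under current conditions you will miss your 8:30 flight.")
```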

    Section 53

    [U]nlike other computing devices and systems that simply provide reminders and access to information but may still allow a user to go in a direction or to take an action that could lead to a mistake being made in their everyday lives, the example system takes additional steps to ensure that mistakes are avoided and the future actions associated with the reminders and information are actually accomplished. Even if the user is unaware that he or she is making a mistake, the example system will still automatically provide information to guide the user back to avoid making the mistake.

    Here, we see something fairly straightforward, but it needs to be laid out before we approach Section 54. The system is designed to actively provide information that will allow a user to avoid making a mistake, even if they are unaware of said mistake to begin with. A very pleasant idea, to be sure, but here’s where it gets dangerous…

    Section 54

    Consequently, the user need not even be aware he or she is making a mistake, the computing system can infer whether the user may need to take corrective action to a current action without user input. The user may experience less stress and spend less time going down the wrong path towards a future action and less time correcting course to avoid making a mistake. By correcting potential mistakes earlier rather than later, the example system may enable the computing device to receive fewer inputs from a user searching for information to try and correct or rectify a mistake.

    It’s the first line of this section that might be considered dangerous. The patent suggests that the system allows for the “correcting of behavior” without the end user even being aware that they are being corrected. Let that sink in for a second: a system controlled by Google can adjust your actions without your knowledge, based on what it determines you are doing or should be doing.

    Below, there will be references to elements of Figure 2. An understanding of these figures is not necessary for our purposes here, but it may be helpful to some, or at least reassure you that you’re not missing anything. So, before we continue, here’s Figure 2:

    Figure 2 Of Patent: Detecting and correcting potential errors in user behavior

    Section 66

    Task and needs rules data store 270C includes one or more previously developed rules that needs prediction module 264 relies on to predict a task or action likely to be performed by a user of a computing device for a current context as well as information that the user may need to accomplish the task. For example, data store 270C may store rules of a machine learning or artificial intelligence system of needs prediction module 264. The machine learning or artificial intelligence system of needs prediction module 264 may access the rules of data store 270C to infer tasks and needs associated with users of computing device 110 for a particular context.

    Section 66 is fairly straightforward: the patent says that machine learning and/or AI will be used to determine the most likely tasks being performed based on past models of behavior, and to weigh those tasks against their impact on performing a future task. This includes establishing the likely tasks and needs associated with that future event, to determine whether the user has completed everything that needs to be done before engaging in it. That may include things like stopping at the gas station to fill up the night before a flight scheduled to take off at 6:30 in the morning.

    It’s worth noting that in other sections not included here, these machine learning and AI systems are constantly training to understand global patterns, as well as patterns unique to individuals.
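    Here is a hypothetical sketch of the kind of prerequisite check implied by the gas-station example: a rule maps a predicted event type to the tasks expected beforehand, and the system flags whatever hasn’t been done yet. The rule table and task names are mine, purely for illustration.

```python
from typing import Dict, List, Set

# Hypothetical "task and needs" rules: event type -> tasks expected beforehand.
NEEDS_RULES: Dict[str, List[str]] = {
    "early_morning_flight": ["fill_gas_tank", "pack_bags", "set_alarm"],
}

def missing_prerequisites(event_type: str, completed: Set[str]) -> List[str]:
    """Return the needs the user has not yet satisfied for the predicted event."""
    return [task for task in NEEDS_RULES.get(event_type, []) if task not in completed]

done = {"pack_bags", "set_alarm"}
print(missing_prerequisites("early_morning_flight", done))  # ['fill_gas_tank']
```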

    Section 72

    [P]rediction module 264 may input the current context, previously determined future context, current action, and expected or future action into action rules data store 270C and receive an indication of a degree of likelihood as to whether the user should be able to drop of dry cleaning he or she will need to attend the event the next day and still be able to attend the event the next day. … action rules data store 270C may output a low degree of likelihood (e.g., a probability less than 50%) since the dry cleaner is closed the next day and therefore the user will be unable to pickup his or her dry cleaning for the event.

    Here’s my favorite part, and where it all comes together for Google. In this section, we see a user dropping off dry cleaning, and the system has predicted it is for an event the user has the next day. Knowing the dry cleaner is closed the following day, Google employs the corrective action of suggesting a nearby dry cleaner that will be open. I imagine that suggestion, which would likely be auditory, would sound something like:

    Google Assistant: The dry cleaner you are arriving at will be closed tomorrow. Is this for the event tomorrow night?

    User: Yes it is.

    Google Assistant: Billy’s Dry Cleaning is two blocks away. Would you like me to enter that as your new destination? (Insert a subtle notice on the display the user isn’t actually looking at because they’re driving, indicating Billy’s Dry Cleaning has paid for the ad.)

    User: Yes.

    I don’t know that they would actually hide the ad notification, obviously, but this is one of the areas where Google could easily influence buying decisions based on AdWords investment.
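    Purely to illustrate the Section 72 scenario (and not how Google would implement or rank anything), here is a sketch that scores the “degree of likelihood” of a successful pickup and falls back to the nearest cleaner that will actually be open. The business-hours data, names, and 50 percent threshold are all invented.

```python
from typing import Dict, List

# Hypothetical businesses with the weekdays they are open.
CLEANERS: List[Dict] = [
    {"name": "Current Dry Cleaner", "distance_blocks": 0, "open_days": {"mon", "tue", "wed"}},
    {"name": "Billy's Dry Cleaning", "distance_blocks": 2, "open_days": {"mon", "tue", "wed", "thu"}},
]

def pickup_likelihood(cleaner: Dict, pickup_day: str) -> float:
    """Crude stand-in for the patent's 'degree of likelihood' output."""
    return 0.9 if pickup_day in cleaner["open_days"] else 0.1

def nearest_open_cleaner(pickup_day: str) -> Dict:
    """Pick the closest cleaner that will actually be open on pickup day."""
    candidates = [c for c in CLEANERS if pickup_day in c["open_days"]]
    return min(candidates, key=lambda c: c["distance_blocks"])

pickup_day = "thu"                 # the day of the event
current = CLEANERS[0]
if pickup_likelihood(current, pickup_day) < 0.5:
    alt = nearest_open_cleaner(pickup_day)
    print(f"{current['name']} is closed on {pickup_day}; "
          f"try {alt['name']} ({alt['distance_blocks']} blocks away).")
```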

    Section 89

    ISS 160 may determine that the user is interacting with computing device 110 to purchase tickets for one event that will overlap and prevent attendance of another event for which the user has already purchased tickets and cause computing device 110 to warn the user of the potential mistake. ISS 160 may be aware of more than just start and stop times of events for determining potential conflicts. For example, the rules of prediction module 164 may learn from prior observations of other users of other computing devices that even though the start time of an event is at a particular time, reserved seating for the event opens up fifteen minutes before the actual start time, and if attendees are not at the event fifteen minutes early, the reserved seats open up to other event goers. So that even if the official start and end times of two events do not overlap, ISS 160 may warn a user of computing device 110 of a potential conflict in booking tickets for the two events if the end time of the earlier event overlaps with the “unofficial” start time (e.g., the reserved seating time) of the event.

    This section is incredibly important in function and utility. Here, we see the system able to understand the less formal aspects of human life, such as the unofficial times to be at events. I presume this would include considering the average time it takes to get through customs on arrival at LAX, so an Uber is waiting for me as I’m leaving rather than sitting there for 30 minutes because it was ordered based on my plane’s scheduled landing time.
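    The conflict logic in Section 89 boils down to an interval check against an “unofficial” start time. Here is a minimal sketch, assuming a fixed 15-minute reserved-seating window; that window and the times are example values of my own, not the patent’s.

```python
from datetime import datetime, timedelta

def events_conflict(end_of_first: datetime, start_of_second: datetime,
                    unofficial_lead_time: timedelta) -> bool:
    """Flag a conflict if the first event ends after the second event's
    'unofficial' start (e.g., when reserved seating is released)."""
    effective_start = start_of_second - unofficial_lead_time
    return end_of_first > effective_start

first_ends = datetime(2017, 4, 15, 19, 50)
second_starts = datetime(2017, 4, 15, 20, 0)
print(events_conflict(first_ends, second_starts, timedelta(minutes=15)))  # True
```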

    The stage is set

    In the patent summary above, we’ve set the stage. In part two (to be published next week), we’ll be looking at a second patent that involves guided purchasing. That is, Google predicting the type of information you’ll need to make a purchase, based on general user behavior combined with your own personal patterns and past purchases, and “guiding” you to the “right decision.” It’s an extremely exciting patent for paid search marketers; if implemented by Google, it would add enormous control over your ads and how and when they’re triggered.

    But before going there, we needed to understand this first patent and how it’s designed to influence users’ core behavior. Soon, we’ll see how this all comes together into one glorious monetization strategy for Google, with more ads displayed and more ads selected by users at critical decision-making points.



