The Generative AI Revolution: Key Legal Considerations for the Hospitality Industry

For better or worse, generative Artificial Intelligence (AI) is already transforming the way we live and work. Within two months of its initial public release, ChatGPT reached 100 million monthly active users, making it the fastest-growing consumer application in history. Other popular generative AI tools such as GitHub Copilot, DALL-E, HarmonAI, and Runway can generate computer code, images, songs, and videos, respectively, with limited human involvement. The implications are immense and have already sparked calls for new federal regulatory agencies, a pause on AI development, and even concerns about extinction.


This alert analyzes how AI is already affecting the hospitality industry, as well as some of the key legal considerations that may shape the future of generative AI (GenAI) tools. You can also click here to watch our latest Fox Forum, in which we talk with Mike Pell, the visionary innovation leader at Microsoft, a principal investor in OpenAI, the trailblazing company behind ChatGPT.

The hospitality industry is in its initial phase of adopting AI, and it is already clear that AI has the potential to revolutionize many aspects of hotel operations and customer experience. The industry is now focusing on how to use AI to improve customer experience, automate repetitive tasks, create operational efficiencies, and enhance brand awareness and customer loyalty.

GenAI could affect hotel operations in many ways. A GenAI chatbot could take a guest’s room service order or serve as a virtual receptionist that not only fully automates check-in and check-out, but also uses a “semantic search” function to answer guest questions such as, “Where is the best place for coffee near here?” The chatbot answers by querying a database of options and using the AI model to find the closest match. Another application is AI agents: the AI is asked to create tasks for itself and is given the ability to interact with a computer to execute them. AI agents could be used in a fashion similar to the virtual receptionist described above. AI could also create efficiencies in inventory management, housekeeping room assignments, and maintenance through smart building systems.
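By way of illustration, the sketch below shows the kind of “semantic search” lookup such a chatbot might perform: the guest’s question and a small knowledge base of local recommendations are embedded and compared by similarity. The model name, sample data, and answer() helper are assumptions for illustration only, not any particular vendor’s implementation.

```python
# Minimal sketch of semantic search over a hotel's knowledge base.
# Illustrative only; a production system would use the hotel's own data.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small general-purpose embedding model

# Hypothetical knowledge base a hotel might maintain
knowledge_base = [
    "Blue Door Cafe, two blocks east, is a popular spot for espresso and pastries.",
    "The hotel gym on the third floor is open 24 hours for registered guests.",
    "Checkout time is 11 a.m.; late checkout can be requested at the front desk.",
]
kb_embeddings = model.encode(knowledge_base, convert_to_tensor=True)

def answer(question: str) -> str:
    """Return the knowledge-base entry most similar to the guest's question."""
    q_embedding = model.encode(question, convert_to_tensor=True)
    scores = util.cos_sim(q_embedding, kb_embeddings)[0]
    return knowledge_base[scores.argmax().item()]

print(answer("Where is the best place for coffee near here?"))
```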

At the recent New York University Hospitality Conference, Tim Hentschel, CEO of HotelPlanner, gave examples of both positive and negative customer experiences with AI chatbots used to change reservations. The difference was clear: customers had a better experience when a human employee used AI to shorten wait times than when AI was the sole interface, which can be frustrating. In the short term, the AI-human hybrid model is far more reliable than AI alone. The risk of hallucinations from GenAI (i.e., when the model invents its own “facts,” such as the fake court citations and caselaw references recently in the news) is significant for all industries, including hospitality. Thus, the most practical short-term application of GenAI is likely to be speeding up customer-support responses: the AI produces a first answer or analysis that a human employee can review and use to give guests faster, better-informed recommendations.

Industry leaders have also discussed using AI to enhance hotel revenue management. With dynamic pricing models that share information across assets, hotel managers can optimize prices and bookings to maximize revenue, relying on AI to assess factors such as demand, peak usage, and occupancy rates in real time. AI also has the potential to personalize pricing to individual guests based on their past behavior and demographics, and to identify opportunities for upselling and cross-selling. These features, however, are not without legal risk: any use that is collusive or otherwise results in price-fixing or discrimination could open the door to a lawsuit.
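As a simplified illustration, the sketch below shows how a dynamic pricing function might weigh real-time factors such as demand, occupancy, and peak usage. The weights and thresholds are arbitrary assumptions for illustration, not an actual revenue-management model.

```python
from dataclasses import dataclass

@dataclass
class MarketSnapshot:
    occupancy_rate: float   # fraction of rooms currently booked, 0.0-1.0
    demand_index: float     # forecast demand vs. seasonal baseline, 1.0 = typical
    is_peak_period: bool    # holiday, conference, or other peak-usage window

def dynamic_rate(base_rate: float, m: MarketSnapshot) -> float:
    """Illustrative rate adjustment; real revenue-management models are far more complex."""
    rate = base_rate * m.demand_index
    if m.occupancy_rate > 0.85:   # scarce inventory commands a premium
        rate *= 1.15
    if m.is_peak_period:
        rate *= 1.10
    return round(rate, 2)

print(dynamic_rate(200.00, MarketSnapshot(occupancy_rate=0.90, demand_index=1.2, is_peak_period=True)))
```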

InterContinental Hotels Group recently partnered with Winnow Solutions with the goal of using AI to reduce the chain’s food waste by up to 30%. By connecting waste bins and inventory systems to AI, hotels should be able to record more efficiently and accurately how quickly and how often certain items are discarded. Hotel kitchens can use this information to adjust future buying decisions, menus, and food preparation techniques.
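A minimal sketch of the underlying idea, assuming a simple discard log from connected bins and an arbitrary review threshold, might look like the following; it is illustrative only and not a description of Winnow’s actual system.

```python
from collections import defaultdict

# Hypothetical discard log: (item, kilograms discarded) entries captured by connected bins
discard_log = [
    ("smoked salmon", 1.2),
    ("croissants", 0.8),
    ("smoked salmon", 2.1),
    ("melon", 0.4),
]

def flag_overordered(log, threshold_kg=2.0):
    """Total waste per item and flag anything above a simple threshold for purchasing review."""
    totals = defaultdict(float)
    for item, kg in log:
        totals[item] += kg
    return {item: kg for item, kg in totals.items() if kg >= threshold_kg}

print(flag_overordered(discard_log))  # flags items whose discard volume exceeds the threshold
```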

Finally, AI can help hotel owners analyze guest feedback and social media posts and suggest responses that hotel staff can review and edit. AI can also track and analyze guests’ booking behavior, which could help hotels create personalized marketing campaigns targeted at particular customers.

AI is already shaping the guest experience, and hospitality companies should consider the legal issues outlined below when deciding how best to use it in their businesses.

1. Accuracy and Reliability

For all their well-deserved accolades and hype, GenAI tools remain a work in progress. Users, especially commercial enterprises, should never assume that AI-created works are accurate, non-infringing, or fit for commercial use. In fact, there have been numerous documented instances in which GenAI tools created works that arguably infringe the copyrights of existing works, made up facts, or cited phantom sources. Works created by GenAI may also incorporate or display third-party trademarks or celebrity likenesses, which generally cannot be used for commercial purposes without appropriate rights or permissions. As with any other content, companies should carefully vet anything produced by GenAI before using it commercially.

2. Data Security and Confidentiality

Before utilizing GenAI tools, companies should consider whether the specific tools adhere to internal data security and confidentiality standards. Like any third-party software, these tools vary in their security and data processing practices. Some tools may store and use the prompts and other information submitted by users. When an employee uses a GenAI tool such as ChatGPT, the prompt text is sent to the provider’s model, and OpenAI may retain that prompt unless the user opts out or the company uses an enterprise offering. Reports indicate that employees have already leaked confidential business information and customer data in prompts, often without realizing it, so it is important that employees understand how to use GenAI tools without disclosing confidential information. Other tools offer assurances that prompts and other information will be deleted or anonymized. Enterprise AI solutions, such as the Azure OpenAI Service, can also help reduce privacy and data security risks by offering access to popular tools like ChatGPT, DALL-E, Codex, and more within the data security and confidentiality parameters required by the enterprise.
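As one illustrative safeguard, a company might screen prompts for obviously sensitive values before they leave its systems. The sketch below uses a few regular-expression patterns (the loyalty-ID format is hypothetical); it is a minimal illustration, not a substitute for a vendor’s enterprise controls or a careful review of the tool’s terms.

```python
import re

# Illustrative patterns only; a real data-loss-prevention screen would be far broader.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "loyalty_id": re.compile(r"\bLOY-\d{6}\b"),  # hypothetical internal ID format
}

def redact(prompt: str) -> str:
    """Replace likely-sensitive values before the prompt leaves the company's systems."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt

print(redact("Guest jane.doe@example.com (LOY-123456) asked about a refund to card 4111 1111 1111 1111."))
```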

Before authorizing the use of GenAI tools, organizations and their legal counsel should (1) carefully review the applicable terms of use; (2) inquire about access to tools or features that may offer enhanced privacy, security, or confidentiality; and (3) consider whether to limit or restrict access on company networks to any tools that do not satisfy company data security or confidentiality requirements.

3. Software Development and Open-Source Software

One of the most popular use cases for GenAI has been computer coding and software development. But the proliferation of AI tools like GitHub Copilot, as well as a pending lawsuit against its developers, has raised a number of questions for legal counsel about whether use of such tools could expose companies to legal claims or license obligations.

These concerns stem in part from the use of open-source code libraries in the data sets for Copilot and similar tools. While open-source code is generally freely available for use, that does not mean that it may be used without condition or limitation. In fact, open-source code licenses typically impose a variety of obligations on individuals and entities that incorporate open-source code into their works. This may include requiring an attribution notice in the derivative work, providing access to source code, and/or requiring that the derivative work be made available on the same terms as the open-source code.

Many companies, particularly those that develop valuable software products, cannot risk having open-source code inadvertently included in their proprietary products or disclosing proprietary code through insecure GenAI coding tools. That said, some AI developers now provide tools that allow coders to exclude AI-generated code that matches code in large public repositories (in other words, ensuring the AI assistant is not directly copying public code), which reduces the likelihood of an infringement claim or the inadvertent inclusion of open-source code. As with other AI-generated content, users should proceed cautiously, carefully reviewing and testing any AI-contributed code.

4. Content Creation and Fair Compensation

In a recent interview, Billy Corgan, the lead singer of the Smashing Pumpkins, predicted that “AI will change music forever” because once young artists figure out they can use GenAI tools to create new music, they won’t spend 10,000 hours in a basement the way he did. The same could be said for photography, visual art, writing, and other forms of creative expression.

This challenge to the notion of human authorship has ethical and legal implications. For example, GenAI tools have the potential to significantly undermine the intellectual property (IP) royalty and licensing regimes that are intended to ensure human creators are fairly compensated for their work. Consider the recent example of the viral song, “Heart on My Sleeve,” which sounded like a collaboration between Drake and the Weeknd, but was in fact created entirely by AI. Before being removed from streaming services, the song racked up millions of plays — potentially depriving the real artists of royalties they would otherwise have earned from plays of their copyrighted songs. In response, some have suggested that human artists should be compensated when GenAI tools create works that mimic or are closely inspired by copyrighted works and/or that artists should be compensated if their works are used to train the large language models that make GenAI possible. Others have suggested that works should be clearly labeled if they are created by GenAI, so as to distinguish works created by humans from those created by machine.

5. Intellectual Property Protection and Enforcement

Content produced without significant human control and involvement is not protectable by US copyright or patent laws, creating a new orphan class of works with no human author and potentially no usage restrictions. That said, one key principle can go a long way to mitigating IP risk: GenAI tools should aid human creation, not replace it. Provided that GenAI tools are used merely to help with drafting or the creative process, then it is more likely that the resulting work product will be protectable under copyright or patent laws. In contrast, asking GenAI tools to create a finished work product, such as asking it to draft an entire legal brief, will likely deprive the final work product of protection under IP laws, not to mention the professional responsibility and ethical implications.

6. Labor and Employment

When the Writers Guild of America recently went on strike, one issue in particular generated headlines: a demand by the union to regulate the use of AI on union projects, including prohibiting AI from writing or re-writing literary material; prohibiting its use as source material; and prohibiting the use of union content to train large AI language models. These demands are likely to presage future battles to maintain the primacy of human labor over cheaper or more efficient AI alternatives. Meanwhile, the Equal Employment Opportunity Commission (EEOC) is warning companies about the potential adverse impacts of using AI in employment decisions.

7. Future Regulation

Earlier this year, Italy became the first Western country to ban ChatGPT, but it may not be the last. In the US, legislators and prominent industry voices have called for proactive federal regulation, including the creation of a new federal agency responsible for evaluating and licensing new AI technology. Others have suggested creating a federal private right of action that would make it easier for consumers to sue AI developers for the harm their tools cause. Whether US legislators and regulators can overcome partisan divisions and enact a comprehensive framework seems unlikely, but as is becoming increasingly clear, these are unprecedented times.

If you have questions about any of these issues or want to plan ahead, contact one of the authors or a member of our AI, Metaverse & Blockchain industry team.
