Recently, Meta, the parent company of Facebook, Instagram, WhatsApp, and Messenger, announced a bold and controversial move: it will provide the U.S. government with access to its artificial intelligence (AI) model, Llama. The decision grants national security and defense agencies, along with their private-sector collaborators, the ability to work with Llama's advanced AI capabilities. Meta frames this as supporting "responsible and ethical" uses that bolster U.S. security and prosperity. Yet the move raises pressing ethical questions for users, privacy advocates, and the general public, and it highlights a broader dilemma about the responsibilities and limits of technology companies that release open-source AI.