Volume 27, Number 2
The Case Against Disclosure: Defending Creative Autonomy in the Age of AI, by James Hutson and Daniel Plate (Common Ground Research Network, 2025, 240 pages). ISBNs: 978-1-966214-56-4 (hardback), 978-1-966214-57-1 (paperback), and 978-1-966214-58-8 (eBook).
Mind, Machine, and Will: Determinism, Responsibility, and Agency in the Age of AI, by Daniel Plate and James Hutson (Nova Science, 2025, 231 pages). ISBNs: 979-8-89530-934-6 (hardback) and 979-8-89530-952-0 (eBook).
Reviewed by: Emily Pickering, Boise State University, USA
From Socrates’ condemnation of the written word to today’s anxiety over generative artificial intelligence, technological changes have consistently disrupted how humans create, communicate, and claim ownership of ideas. In the two books, The Case Against Disclosure and Mind, Machine, and Will, James Hutson and Daniel Plate situate contemporary AI debates within this longer intellectual history. They urge readers to reconsider how meaning, authorship, and agency are understood in an age of hybrid human-machine collaboration. To address these complexities, the authors draw on multidisciplinary theory and practice. Reviewing the books together highlights their shared call to educators, scholars, artists, and policy makers to rethink the AI practices and policies shaping creative and scholarly life.
In The Case Against Disclosure: Defending Creative Autonomy in the Age of AI (2025), Hutson and Plate argue that in a digital era in which works are increasingly created collectively, it is practically untenable for institutions to demand exhaustive AI disclosure documents that attempt to record every AI-influenced prompt or revision. This form of disclosure misunderstands the nonlinear nature of creativity, which, they show, has always resisted such bureaucratic control. Exhaustive disclosure requirements shift labor away from genuine intellectual responsibility and toward a new burden of proving authorship. Creativity has never been fully transparent or individually isolated; it is shaped by subconscious influence, collaboration, and cultural inheritance.
A central thread running through the book is the defense of discernment over disclosure. Hutson and Plate do not deny the importance of ethical engagement with AI, but they question whether forced, exhaustive procedural transparency meaningfully protects authorship. Instead, they argue that authorship hinges on human judgment, selection, and responsibility during the writing process, even within algorithmic collaboration. Historical examples, from medieval scribal practices to kinetic art and digital media, show that co-creative processes and varying forms of collective ownership of knowledge are longstanding, cross-cultural traditions.
Particularly interesting is the authors’ critique of institutional fear. They suggest that AI disclosure regimes may serve less to protect creativity and more to preserve control. In educational settings, plagiarism-detection technologies and rigid policies often disadvantage multilingual and neurodiverse writers, reinforcing narrow norms for originality. This critique raises important questions about how emerging AI policies may shape access, participation, and equity across increasingly networked learning environments. In contrast to this rigidity, the authors advocate for encouraging ethical exploration with AI rather than enforcing suspicion. Responsible AI use, they argue, is demonstrated not through exhaustive disclosure but through intentional integration.
Ultimately, The Case Against Disclosure defends creative autonomy, not as an isolated practice but as the capacity for people to exercise discernment within evolving technological environments. The authors propose a pragmatic framework to ensure accountability in AI use without impeding creativity. They write that their framework “explicitly preserves methodological privacy and intellectual autonomy, even as it maintains legitimate transparency” (p. 160). The framework offers a pathway to reduce bureaucratic friction by creating space and processes for continual dialogue between institutional administrators and content creators who use AI-driven tools.
The book Mind, Machine, and Will: Determinism, Responsibility, and Agency in the Age of AI (2025) advances a bold and philosophically rigorous argument: contemporary debates about artificial intelligence, authorship, and accountability cannot be resolved by defending traditional notions of autonomous free will. Drawing on Wittgensteinian philosophy, contemporary neuroscience, legal theory, and speech act theory, the authors propose a shift from metaphysical individualism toward a practice-based, communal model of agency. Meaning, value, and agency are not grounded in private mental states or in hidden intentions, but in publicly accessible practices. Following Wittgenstein’s critique of private language, the authors argue that rule-following and meaning-making are by nature social activities. Agency, in this account, is not a purely metaphysical property but something constituted within communities whose norms and expectations remain open to revision.
This philosophical shift is especially important given neuroscientific findings that challenge traditional notions of free will. Research demonstrating that decisions arise from complex causal chains, from neural processes to environmental conditions, undermines the idea that actions are fully self-originating. Rather than dissolving responsibility, however, the authors argue that these findings invite a revision of the concept of accountability: responsibility consists in participation in processes of reason-giving and justification rather than in uncaused, autonomous intention.
This reframing has significant implications for generative artificial intelligence. As machine-generated content becomes increasingly indistinguishable from human-generated work, questions of authorship and authenticity cannot be settled by purely metaphysical claims to creativity. Instead, the authors propose a practice-based model in which responsibility attaches to demonstrated competence and public participation. Transparency, in this view, does not require an audit of every neural process or algorithmic prompt; it requires the visible assumption of responsibility within shared institutional frameworks.
One of the book’s most distinctive contributions is its portrayal of human-AI hybrid authorship. Rather than framing AI as a feared rival to the human mind, the authors position AI as a collaborator in changing communal practices. Copyright systems grounded in human exceptionalism, they argue, are increasingly inadequate. What deserves recognition is not metaphysical originality but meaningful participation in socially recognized creative practices. At the same time, the authors identify the weaknesses of overreliance on AI in the creative process: the limitations of artificial intelligence tools underscore the necessity of continuous human oversight and responsive correction. Ethical resilience depends on constructing institutions that support participation, adaptation, and collective learning.
One of the most provocative moves in Mind, Machine, and Will is its challenge to traditional models of blame and responsibility. If no idea is fully self-originating, then legal and ethical systems cannot continue to focus primarily on hidden intention or individual will. Instead, the authors urge a shift toward evaluating how agents—both human and machine—function within shared practices. The authors write that the challenge is “not how to preserve the illusion of individual autonomy, but how to construct public, corrigible practices that are trustworthy in their own right” (p. 89). This may leave readers feeling unsettled at first, yet the authors reassure that pushing past metaphysical nostalgia frees societies to build transparent systems capable of adapting alongside technological change.
Ultimately, this book offers reconstructed conceptions of agency, value, and justice in a time of machine intelligence. In doing so, it steers the AI debate away from the panic and prohibition that have historically accompanied technological transformations, and toward institutional redesign. Ethical AI, on this account, is less about isolating the intention of the agent and more about creating communities of shared responsibility.
Read together, The Case Against Disclosure and Mind, Machine, and Will offer a layered response to contemporary anxieties about AI, creativity, and accountability. While the former focuses on institutional policy and the practical consequences of excessive AI disclosure mandates, the latter examines the philosophical assumptions on which such mandates rest.
Both works challenge the widespread belief that transparency alone guarantees integrity. In The Case Against Disclosure, Hutson and Plate question whether documenting every AI prompt or revision meaningfully preserves authorship. In Mind, Machine, and Will, they extend this critique by arguing that accountability has never depended on full transparency of inner processes but on participation in public demonstrations of reason-giving and shared standards.
The books complement one another in important ways. The Case Against Disclosure reassures readers that human discernment remains central even in human-AI hybrid creative practices. It defends creative autonomy and warns against institutional overreach. Mind, Machine, and Will, however, complicates the very notion of autonomy by questioning whether it has ever been as independent or self-contained as cultural narratives suggest. Rather than abandoning responsibility, the second book reframes it as something publicly enacted within communities and institutions. When read together, the defense of autonomy in the first book is strengthened by the second book’s argument that agency is sustained through shared public practices.
Readers primarily concerned with educational policy, publishing standards, or institutional AI guidelines may find The Case Against Disclosure most immediately applicable. Those interested in deeper philosophical questions about free will, moral responsibility, and the future of legal systems in a world saturated with AI will likely find Mind, Machine, and Will more engaging. Read together, however, the books provide a fuller understanding of transparency in creativity as a shared, evolving practice, one in which content creators are free to adopt or reject AI tools as active agents rather than passive consumers. Both books will reward readers interested in what it means for a society to use AI.

Book Review: The Case Against Disclosure and Mind, Machine, and Will by Emily Pickering is licensed under a Creative Commons Attribution 4.0 International License.