The AI Echo: How a Distinctive Sentence Structure Reveals the Growing Influence of Generative AI in Corporate Communications

The proliferation of a particular sentence construction, "It’s not just this — it’s also that," within corporate communications has emerged as a compelling, almost unmistakable linguistic indicator of the growing integration of generative artificial intelligence (AI) into business operations. What began as a subtle stylistic quirk has rapidly escalated into a pervasive rhetorical pattern, drawing the attention of linguistic analysts and market intelligence firms alike. Once an unremarkable stylistic choice, the phrasing has become a fingerprint of synthetic authorship, suggesting a profound shift in how companies craft their public messages, from earnings reports to government filings.

The observation was first highlighted by a comprehensive report from Barron’s, which meticulously documented a dramatic surge in the use of this specific construction. The financial publication leveraged the extensive database of AlphaSense, a leading market intelligence firm, to quantify the prevalence of the phrase across a vast repository of corporate news releases, quarterly earnings reports, and regulatory submissions. The findings were stark: usage of "It’s not just X — it’s also Y" has more than quadrupled within a remarkably short period, rising from approximately 50 mentions in 2023 to over 200 recorded instances in 2025. Growth of that magnitude is unlikely to be coincidence; it points instead to a systemic influence reshaping the corporate lexicon.
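As a rough illustration of how such a frequency count might be performed (AlphaSense's actual methodology is not public), a simple regular-expression scan over a corpus of documents can tally occurrences of the construction. The pattern below is a hypothetical approximation that catches a few common variants:

```python
import re

# Hypothetical sketch of a phrase-frequency scan, in the spirit of the
# corpus analysis the article describes. The pattern is an illustrative
# approximation, not the actual AlphaSense query.
PATTERN = re.compile(
    r"\b(?:it[’']?s|this\s+is)\s+not\s+just\b"   # opening half of the construction
    r".{0,80}?"                                  # a short span of intervening text
    r"\b(?:it[’']?s\s+)?also\b",                 # closing half
    re.IGNORECASE | re.DOTALL,
)

def count_construction(documents):
    """Return the total number of matches across all documents."""
    return sum(len(PATTERN.findall(doc)) for doc in documents)

corpus = [
    "It’s not just a product launch — it’s also a statement of intent.",
    "Revenue grew 12% year over year.",
    "This is not just an upgrade; it is also a platform shift.",
]
print(count_construction(corpus))  # two of the three documents match
```

Run over a corpus of filings grouped by year, a count like this would yield exactly the kind of trend line (roughly 50 in 2023 versus over 200 in 2025) that the report describes.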

The Linguistic Fingerprint of AI: Deconstructing the "Not Just… But Also" Phenomenon

The underlying reason for this linguistic phenomenon lies in the very architecture and training methodology of large language models (LLMs) like ChatGPT. These AI systems are trained on colossal datasets of human-generated text, encompassing everything from academic papers and news articles to social media posts and corporate documents. During this training, LLMs learn to identify and replicate common patterns, rhetorical devices, and sentence structures that are statistically prevalent in their training data. The "It’s not just… but also" construction, while a legitimate and effective rhetorical tool for emphasizing nuance and dual importance, appears to be disproportionately favored by these models due to its strong associative patterns within the vast textual corpora they process.

Linguists and AI researchers suggest that LLMs, in their pursuit of generating coherent and "human-like" text, often default to structures that exhibit a high degree of predictability and statistical frequency. This particular construction provides a clear framework for presenting multiple facets of an idea or situation, making it an efficient and frequently adopted template for AI-generated prose. As generative AI tools become more sophisticated and their adoption more widespread, these subtle stylistic predilections become amplified, creating a discernible "AI echo" across various forms of communication. The irony is palpable: the very tools designed to mimic human creativity are inadvertently creating new, predictable patterns that betray their synthetic origin.

A Chronology of Integration: From Novelty to Corporate Standard

The timeline of this linguistic shift closely mirrors the rapid ascent of generative AI technologies in the public and corporate consciousness.

  • Late 2022: The public release of advanced generative AI models, notably OpenAI’s ChatGPT, sparks widespread interest and experimentation. Initial uses are often exploratory, focusing on content generation for marketing, basic summaries, and creative writing.
  • 2023: Businesses begin cautiously integrating AI tools into various departmental workflows. Communications teams experiment with AI for drafting press releases, internal memos, and initial versions of reports. Early adopters note efficiency gains but also grapple with quality control and the distinct "voice" of AI. The "not just… but also" phrase starts to see a noticeable uptick, particularly in less critical, high-volume content.
  • 2024: AI adoption accelerates across industries. Companies invest in custom AI solutions and integrate LLMs into proprietary platforms. The demand for rapid content generation for investor relations, regulatory compliance, and public relations grows, pushing communications teams to rely more heavily on AI for first drafts and ideation. The number of mentions of the characteristic phrase doubles compared to 2023, indicating a more entrenched usage.
  • 2025: Generative AI becomes a standard tool in many corporate communications departments. Prompt engineering grows more sophisticated, but the models’ underlying stylistic biases persist. The "not just… but also" construction quadruples from its 2023 baseline, becoming a widespread feature of official corporate documents, often passing human review because reviewers are not screening for linguistic patterns. Regulatory bodies and market analysts begin to informally note a shift in corporate language.
  • Early 2026: The Barron’s report formalizes these anecdotal observations with hard data from AlphaSense, unequivocally demonstrating the epidemic-like spread of the AI-preferred phrase. This data serves as a critical turning point, shifting the conversation from speculative observations to data-backed analysis regarding AI’s impact on corporate authenticity.

This chronology underscores not just the increasing technical capability of AI, but also the escalating reliance of corporations on these tools to manage the ever-growing volume and speed of information dissemination.

Beyond "Not Just… But Also": Other AI Linguistic Markers

While the "It’s not just X — it’s also Y" construction is a particularly salient example, it is not the sole linguistic fingerprint of AI-generated text. Other stylistic tells are also being observed and analyzed:

  • Overuse of Em-dashes: The Barron’s report also briefly flags em-dashes as a potential indicator. LLMs often employ em-dashes more frequently than human writers, sometimes creating a fragmented or overly emphatic style that can feel unnatural or redundant. This may stem from training data that contains a high frequency of such punctuation in certain formal or academic contexts, which the models then generalize.
  • Generic or Vague Adjectives and Adverbs: AI-generated text can sometimes lean on broadly positive but unspecific descriptors (e.g., "robust," "seamless," "innovative," "dynamic") without providing concrete examples or details to substantiate them. This can make the prose feel bland or lacking in genuine insight.
  • Repetitive Phrasing or Sentence Structures: Despite their ability to generate diverse text, LLMs can sometimes fall into repetitive patterns, especially when asked to elaborate on a single theme, leading to a sense of predictability or lack of stylistic variation.
  • Formal but Impersonal Tone: While aiming for professionalism, AI-generated text can sometimes lack the subtle nuances of human empathy, humor, or genuine connection, resulting in a tone that is technically correct but emotionally sterile.
  • Anomalous Word Choices or Collocations: Occasionally, an AI might use a word or phrase that is grammatically correct but feels slightly off in context, or combine words in a way that a human speaker would find unusual. This can be a subtle but revealing sign.

These markers, individually or in combination, contribute to a growing body of evidence that allows for the increasingly accurate identification of AI-assisted writing.
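To illustrate, some of these markers can be approximated with simple string statistics, such as em-dash density and the rate of generic corporate adjectives. The sketch below is purely illustrative; the feature set, word list, and per-1,000-word normalization are assumptions, not a validated AI detector:

```python
import re

# Illustrative word list; a real analysis would use a larger, curated set.
GENERIC_ADJECTIVES = ["robust", "seamless", "innovative", "dynamic"]

def stylometric_markers(text):
    """Compute a few crude stylometric signals, normalized per 1,000 words."""
    words = re.findall(r"\b\w+\b", text)
    per_k = 1000.0 / max(len(words), 1)
    em_dashes = text.count("—")
    generic_hits = sum(
        len(re.findall(rf"\b{w}\b", text, re.IGNORECASE))
        for w in GENERIC_ADJECTIVES
    )
    return {
        "em_dashes_per_1k_words": em_dashes * per_k,
        "generic_adjectives_per_1k_words": generic_hits * per_k,
    }

sample = "Our robust, seamless platform — built for dynamic teams — is innovative."
print(stylometric_markers(sample))
```

Signals like these are only suggestive on their own; in practice they would be compared against a baseline of known human-written documents from the same company or sector.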

Implications for Corporate Authenticity, Trust, and Compliance

The widespread adoption of AI-generated language in corporate communications carries significant implications for various stakeholders:

  • Corporate Authenticity and Brand Voice: Companies risk diluting their unique brand voice and appearing less authentic if their communications increasingly adopt generic, AI-driven linguistic patterns. In an era where consumers and investors value transparency and genuine engagement, a perceived lack of human touch could erode trust.
  • Investor Relations and Market Perception: Investors rely on clear, precise, and authentic communication from companies to make informed decisions. If earnings reports or investor calls are perceived as being heavily AI-generated, it could raise questions about the depth of human oversight, the nuance of financial disclosures, and the overall credibility of the information presented. The stakes are particularly high in sensitive financial reporting, where every word can influence market sentiment.
  • Regulatory Compliance and Legal Scrutiny: Government filings and public disclosures are subject to stringent legal requirements for accuracy and completeness. While AI can assist in drafting, the ultimate responsibility lies with human executives. If AI introduces ambiguities, misinterpretations, or omits critical context due to its inherent biases, companies could face significant regulatory penalties or legal challenges. The lack of human intent behind specific phrasing could also complicate legal interpretations.
  • Employment and Skill Sets: The increasing reliance on AI for drafting communications could alter the demand for traditional writing and editing skills within corporations. While AI can handle volume, the need for human professionals capable of refining AI output, injecting strategic nuance, ensuring ethical compliance, and preserving brand voice becomes even more critical.

Expert Perspectives: Balancing Efficiency with Integrity

Communication specialists express a mix of enthusiasm for AI’s efficiency gains and concern over potential pitfalls. "Generative AI is an incredibly powerful tool for accelerating content creation," states Dr. Eleanor Vance, a professor of corporate communications. "However, the data from AlphaSense and Barron’s highlights a critical challenge: the risk of homogenizing corporate language. Companies must strive to maintain a distinctive voice, even with AI assistance. The goal should be augmentation, not replacement, of human creativity and strategic thinking."

AI ethicists, such as Professor Kenji Tanaka, emphasize the importance of transparency. "The public has a right to know if the information they are consuming, especially from influential corporate entities, is human-authored or AI-assisted. While direct disclosure might be impractical for every sentence, companies need to consider broader ethical guidelines around AI use in public-facing communications to maintain trust and accountability."

Legal experts echo concerns about accountability. "While AI can draft, it cannot take legal responsibility," comments Sarah Chen, a corporate lawyer specializing in disclosure. "Any document, whether a press release or an SEC filing, is ultimately attributed to the company and its officers. If AI introduces errors or problematic phrasing, the liability rests squarely on human shoulders. This necessitates robust human review processes and clear internal policies on AI usage."

Challenges and the Path Forward

The emergence of AI-driven linguistic patterns presents a dual challenge for corporations: how to leverage the undeniable efficiencies of generative AI while safeguarding authenticity and integrity.

  • Developing AI Detection Tools: As AI writing becomes more sophisticated, so too must the tools designed to detect it. Companies, regulators, and news organizations may increasingly deploy AI detection software not just to identify plagiarism, but also to understand the extent of AI influence in corporate messaging.
  • Implementing Robust Human Oversight: The role of human editors, communication strategists, and legal reviewers becomes paramount. Their task evolves from drafting from scratch to critically evaluating, refining, and injecting human nuance into AI-generated content. This requires a deeper understanding of AI’s capabilities and limitations.
  • Establishing Ethical AI Guidelines: Corporations need to develop clear internal policies regarding the use of generative AI in communications. These guidelines should address issues such as:
    • When AI assistance is appropriate.
    • What level of human review AI-generated content requires.
    • How to maintain brand voice and avoid generic AI language.
    • Whether heavily AI-generated content in sensitive areas warrants disclosure.
  • Training and Education: Employees in communications, legal, and investor relations departments require training on effective prompt engineering, critical evaluation of AI output, and the ethical implications of using AI in their roles.

The Future of Business Communication: A Hybrid Landscape

The data from AlphaSense and the subsequent analysis by Barron’s serve as a powerful signal in the ongoing evolution of corporate communication. The "It’s not just this — it’s also that" phenomenon is more than a fleeting stylistic trend; it is a symptom of a profound technological shift. As generative AI continues to mature, its linguistic fingerprints will likely become more subtle and harder to detect, prompting a continuous arms race between AI generation and AI detection.

The future of business communication will undoubtedly be a hybrid landscape, where human ingenuity and AI efficiency intertwine. The challenge for corporations will be to harness AI’s power to enhance, rather than diminish, the clarity, authenticity, and strategic impact of their messages. The current "epidemic" of AI-preferred phrasing serves as a crucial reminder that while AI can replicate, it is the human element that ultimately imbues communication with genuine meaning, trust, and purpose. Companies that navigate this evolving terrain successfully will be those that master the art of integrating AI as a strategic partner, ensuring that the voice resonating through their communications remains authentically their own.
