AI's Original Sin: Copyright Confusion at the Core of Machine Learning
In boardrooms from New York to Silicon Valley, a troubling pattern emerges whenever executives discuss their AI strategies. The conversation inevitably stalls on one fundamental question: "What data is actually in our models?" This uncertainty is more than a technical curiosity; it has become the defining business challenge of the AI era.
Recent research reveals the scope of this challenge. MIT's Data Provenance Initiative systematically audited 1,800 public AI training datasets, discovering that 68% lack proper licensing documentation while 50% mischaracterize their contents entirely. For business leaders, this translates into what risk management professionals recognize as "unknown unknowns": exposures they cannot quantify, manage, or price effectively.
This data opacity crisis demands strategic attention from professionals and enterprise executives alike. While the technical capabilities of artificial intelligence continue to advance rapidly, the fundamental question of training data provenance threatens to undermine long-term AI adoption across industries.
Understanding the Business Impact
The challenge extends far beyond legal compliance. Organizations deploying AI systems inherit the copyright and intellectual property risks embedded in their training data, creating liability concerns that traditional risk frameworks struggle to address. When companies use AI tools for content creation, customer service, or strategic analysis, they potentially expose themselves to copyright infringement claims based on data decisions made by third-party AI developers.
This uncertainty manifests in measurable business costs. Corporate legal departments report significant increases in AI-related risk assessments, while insurance companies exclude AI-generated copyright claims from standard policies. Procurement teams now demand extensive indemnification clauses from AI vendors, protections that may prove inadequate given the scale of potential exposure.
Perhaps most significantly, the uncertainty creates a negative feedback loop that limits AI model quality. Enterprise executives report that copyright concerns discourage data sharing initiatives, reducing the diversity and quality of training datasets while paradoxically increasing legal risk through greater reliance on potentially problematic public datasets.
The Technical Root of the Problem
Unlike traditional software where dependencies can be explicitly managed through version control systems, machine learning models embed training data in ways that make retrospective analysis extraordinarily difficult. During training, neural networks process millions of data points through multiple transformation layers, creating emergent representations that combine elements from the entire training corpus.
This fundamental architecture makes it virtually impossible to determine whether specific copyrighted content influenced model outputs without comprehensive provenance tracking from the initial development stages. The computational efficiency that makes large language models powerful also makes them opaque to post-hoc analysis.
Federal research agencies have begun documenting this technical challenge systematically. The National Institute of Standards and Technology (NIST) AI Risk Management Framework requires "end-to-end lineage evidence" that traces data from initial collection through all processing stages to final model deployment. Current industry practices fall substantially short of these requirements, creating compliance gaps that extend across the enterprise AI landscape.
Emerging Legal Landscape
The legal environment surrounding AI training data continues to evolve rapidly, creating additional uncertainty for business planning. Recent court decisions have produced mixed results that offer little clarity for enterprise risk assessment. Some courts have found that AI training constitutes fair use under copyright law, particularly when the training process is "exceedingly transformative" and when companies have legally purchased training content.
However, other decisions have reached opposite conclusions, particularly when AI systems compete directly with the original content creators. The Delaware federal court's decision in Thomson Reuters v. Ross Intelligence found that AI legal research tools' use of copyrighted legal databases was not fair use because it created competing products serving the same market.
These cases highlight the fundamental uncertainty facing enterprise AI adoption. Organizations cannot reliably predict whether their AI implementations will face successful copyright challenges, making strategic planning and risk management extraordinarily difficult.
Strategic Solutions for Enterprise Leaders
Despite these challenges, practical solutions are emerging that enable organizations to implement AI systems while managing copyright and provenance risks effectively. Leading enterprises are developing comprehensive data governance frameworks that combine technical solutions with organizational processes to create verifiable audit trails throughout the AI development lifecycle.
Automated Provenance Tracking represents the foundation of effective solutions. Modern systems use blockchain-based technologies to create immutable records of all data transformations throughout AI development pipelines. Each processing step, from initial collection through preprocessing, training, and deployment, generates cryptographically signed entries that provide tamper-evident documentation while enabling collaborative development across multiple organizations.
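At their core, such provenance ledgers reduce to an append-only hash chain. The sketch below is a minimal illustration, using hypothetical step names and plain SHA-256 hash chaining in place of full cryptographic signatures, to show how tampering with any earlier entry becomes detectable:

```python
import hashlib
import json
from dataclasses import dataclass, field


@dataclass
class ProvenanceEntry:
    step: str          # e.g. "collection", "preprocessing", "training"
    details: dict      # free-form description of the transformation
    prev_hash: str     # hash of the previous entry ("" for the first)
    entry_hash: str = field(init=False)

    def __post_init__(self):
        payload = json.dumps(
            {"step": self.step, "details": self.details, "prev": self.prev_hash},
            sort_keys=True,
        )
        self.entry_hash = hashlib.sha256(payload.encode()).hexdigest()


def append_entry(log, step, details):
    """Append a new entry linked to the hash of the previous one."""
    prev = log[-1].entry_hash if log else ""
    log.append(ProvenanceEntry(step, details, prev))


def verify_chain(log):
    """Recompute every hash; any edit to an earlier entry breaks the chain."""
    prev = ""
    for e in log:
        payload = json.dumps(
            {"step": e.step, "details": e.details, "prev": e.prev_hash},
            sort_keys=True,
        )
        recomputed = hashlib.sha256(payload.encode()).hexdigest()
        if e.prev_hash != prev or recomputed != e.entry_hash:
            return False
        prev = e.entry_hash
    return True
```

Production systems add signatures and distributed consensus on top of this linkage, but the tamper-evidence property comes from the chain itself.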
Source-Labeling Systems embed machine-readable metadata directly into training data using emerging standards like C2PA (Coalition for Content Provenance and Authenticity). These persistent identifiers specify data origins, ownership rights, and usage permissions, enabling automated processing systems to make real-time decisions about data usage compliance.
Automated Copyright Detection provides practical tools for implementing real-time content filtering during model training. Advanced systems maintain comprehensive databases of known copyrighted content, comparing training data against these references using neural networks trained specifically for copyright detection, achieving accuracy rates exceeding 95% for text, image, and audio content.
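The matching step inside such detection pipelines can be illustrated with a much simpler classical technique, word-level n-gram shingling with Jaccard similarity; the neural detectors described above are far more robust, but the control flow is similar. Names and the threshold below are illustrative only:

```python
def shingles(text, n=5):
    """Word-level n-gram fingerprints of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}


def jaccard(a, b):
    """Overlap ratio between two fingerprint sets."""
    return len(a & b) / len(a | b) if a | b else 0.0


def flag_overlap(candidate, reference_corpus, threshold=0.3):
    """Return reference IDs whose shingle overlap with the candidate
    exceeds the threshold, marking them for legal review."""
    cand = shingles(candidate)
    return [
        ref_id for ref_id, ref_text in reference_corpus.items()
        if jaccard(cand, shingles(ref_text)) >= threshold
    ]
```

In practice the reference database is indexed (e.g. with locality-sensitive hashing) so candidates are not compared against every entry.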
Building Competitive Advantage Through Compliance
Organizations implementing comprehensive data governance systems report significant advantages beyond mere compliance benefits. Systematic provenance tracking improves model performance through better data quality management, reduces development costs through automated compliance checking, and enhances competitive positioning through demonstrated ethical AI practices.
Early research suggests that proactive provenance implementation reduces total development costs by 15-30% compared to reactive compliance approaches. These savings result from decreased legal review requirements, reduced model retraining needs, and streamlined partnership negotiations with data providers.
Market analysis indicates that provenance-verified AI systems command premium pricing in enterprise markets, where customers increasingly prioritize legal and ethical compliance in vendor selection decisions. This trend suggests that comprehensive data governance may become a significant competitive differentiator rather than merely a regulatory burden.
Implementation Roadmap for Organizations
Successful implementation requires a systematic approach that addresses both immediate compliance needs and long-term sustainability requirements. Organizations should begin with comprehensive data asset inventories that identify all training datasets, their sources, licensing terms, and current usage patterns.
The technical infrastructure should combine blockchain-based provenance tracking with automated copyright detection and privacy-preserving training methodologies. This foundation provides the computational capabilities necessary for comprehensive data governance at enterprise scale.
Governance processes must integrate standardized documentation frameworks into all AI development workflows. Dataset cards and model cards provide structured formats for recording comprehensive provenance information, while version control systems track changes to both datasets and documentation throughout the development process.
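A dataset card can be as simple as a structured record validated against a required-field list. The fields below are an illustrative sketch, not a formal standard:

```python
# Illustrative required fields for a dataset card; real frameworks
# (e.g. published dataset-card templates) define richer schemas.
REQUIRED_FIELDS = {
    "name", "version", "sources", "license",
    "collection_date", "known_limitations",
}


def validate_dataset_card(card: dict) -> list:
    """Return the list of required provenance fields missing from a card."""
    missing = sorted(REQUIRED_FIELDS - card.keys())
    # Every listed source should itself carry a license reference.
    for src in card.get("sources", []):
        if "license" not in src:
            missing.append(f"sources[{src.get('name', '?')}].license")
    return missing
```

Running such a check in CI blocks datasets with undocumented licensing from entering the training pipeline in the first place.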
Third-party auditing protocols enable independent verification of provenance claims through standardized assessment methodologies developed by organizations like NIST and academic research institutions. These protocols provide objective frameworks for evaluating data governance practices without requiring access to proprietary information.
Strategic Recommendations
Data provenance should be treated as a foundational requirement rather than an afterthought. Organizations that proactively address these challenges will be positioned to capture AI's competitive advantages while avoiding the legal and reputational risks that threaten reactive adopters.
The investment in comprehensive data governance systems pays dividends beyond compliance. Organizations with robust provenance tracking report improved stakeholder confidence, enhanced partnership opportunities with content creators, and differentiated market positioning based on ethical AI practices.
The regulatory environment will continue tightening, with agencies moving toward mandatory transparency requirements and comprehensive disclosure mandates. Organizations that invest in governance infrastructure now will avoid the disruption and higher costs associated with retrofitting existing AI systems to meet future regulatory requirements.
The Path Forward
The resolution of AI's data challenges requires treating this as fundamentally a business strategy issue rather than purely a technical or legal problem. Organizations that implement systematic solutions based on proven technologies and comprehensive governance frameworks can establish sustainable competitive advantages while supporting continued innovation.
Success requires coordinated implementation of technical infrastructure, governance processes, collaborative frameworks, and continuous monitoring capabilities. This comprehensive approach addresses the fundamental challenges of data provenance while providing practical pathways for implementation across diverse organizational contexts.
The evidence suggests that systematic data governance is not only technically feasible with current technologies but economically advantageous for organizations prioritizing long-term sustainability. By implementing comprehensive solutions now, businesses can establish the foundation for trustworthy AI deployment that supports both competitive advantage and regulatory compliance.
The companies that solve for legal clarity and ethical data practices will ultimately determine AI's commercial trajectory. The choice is stark: invest in systematic data governance now, or inherit tomorrow's liability. In an environment where copyright uncertainty has become AI's defining business risk, comprehensive data solutions may be the prerequisite for long-term adoption.
America’s New AI Gold Rush
The United States is pouring record-breaking sums into artificial intelligence, rewriting the playbook for both venture capital and investment management. That cash surge is colliding with Wall Street’s need for speed, pushing asset managers, hedge funds and robo-advisers to embed large-language models (LLMs) in every workflow. From Silicon Valley to lower Manhattan, AI is no longer a pilot project: it is the operating system of modern finance.
[Figure: US AI Investment: Global Private Investment by Country (2024)]
Funding Firehose: How the U.S. Pulled Away
Venture investors pumped $97 billion into American AI start-ups in 2024, nearly half of all U.S. startup capital[2]. Momentum accelerated this year: first-half funding jumped another 75.6 percent to $162.8 billion, the strongest tally since the 2021 boom[3]. Mega-rounds are driving the spike. OpenAI’s $40 billion raise at a $300 billion valuation set a global record[4][5].
Why the deluge? Three forces converge:
Big-tech balance sheets. Microsoft, Amazon and Google are financing entire rounds to secure model access[6].
Government tailwinds. The CHIPS and Science Act offers $52.7 billion in incentives for domestic chip fabrication plus $200 billion for AI-related R&D, crowding-in private capital[7][8].
Clear regulatory pathways. Compared with Europe’s cautious AI Act, the U.S. maintains lighter-touch oversight, encouraging faster product launches[9].
[Figure: Investment in generative AI startups by big tech and venture capital, 2018-2023, showing a record high in 2023 driven by major tech companies.]
Wall Street Goes West
Silicon Valley’s AI builders have become indispensable to financiers hunting for alpha. Corporate investors now back 71 percent of top U.S. AI start-ups, and financial firms are among the most active limited partners[10]. Meanwhile, multi-strategy hedge funds are opening Palo Alto engineering hubs to lure LLM talent, a trend that helped AI-heavy vehicles outperform peers by 3–5 percentage points annually since ChatGPT’s debut[11].
Data advantage. AI-powered funds process unstructured inputs - satellite images, ESG filings, TikTok hashtags - in milliseconds, spotting anomalies long before human analysts[12].
Speed dividend. Machine-learning signals cut due-diligence timelines by up to 40 percent, letting private-equity teams pre-empt rivals in auctions[13].
Cost cushion. A single LLM agent now produces first-draft research memos in seconds; firms that deploy the tech report 15–20 percent productivity gains[14].
Inside the Asset-Management AI Makeover
A quiet revolution is unfolding on trading desks and in back-office cubicles: 78 percent of U.S. asset managers have already embedded AI somewhere in the investment chain, and adoption is racing toward 90 percent[15]. Market size tells the story. The AI-in-asset-management sector stood at $5.4 billion this year and will rocket to $21.7 billion by 2034—a 24 percent annual clip[16].
Key use-cases are crystallizing:
“We walled off generative AI at first because usage exploded so quickly,” recalls Roger Freeman, Co-Head of Data Science at Neuberger Berman, speaking at the Beryl Elites Conference. “Now an internal GPT writes code, manipulates Excel models and drafts slides - then another model audits the output.”[20]
[Figure: AI applications in financial services: machine learning for fraud detection and recommendations, natural language processing for robo-advice and chatbots, and time series analysis for algorithmic trading.]
Public Money, Private Ambition: CHIPS & Stargate
Washington is not sitting idle. The CHIPS and Science Act unlocked $280 billion for semiconductors and AI research, catalyzing dozens of fabs and test-beds across Texas, Arizona and Ohio[7][21]. Federal AI spending hit $3.3 billion in 2022 and is climbing.
The headline-grabbing move, however, is the Stargate Initiative - a $500 billion private-sector plan backed by OpenAI, SoftBank and Oracle to erect the world’s largest AI data-center campus[22][23]. Phase One’s $100 billion outlay will add more than a million square feet of compute in Abilene, Texas. If completed, Stargate’s power draw would eclipse that of many U.S. states, cementing domestic compute supremacy and creating 100,000 jobs[24].
Talent Gold Mine
AI’s march is reshaping the labor market. There were 35,445 open AI roles in Q1 2025, up 25 percent year-on-year, with median salaries at $157,000[25]. LinkedIn ranks “AI engineer” and “AI consultant” as the two fastest-growing U.S. jobs[26]. Boomers still hold the plurality of AI posts, but millennials dominate software-developer ranks, 2.07 million strong, while Gen Z moves into junior machine-learning roles[27].
Forrester expects cognitive technologies to displace 7 percent of U.S. jobs by 2025 yet create 8.9 million new positions in robot monitoring, data science and automation oversight[28]. The skills shortage is acute: 93 percent of automation technologists feel only partly prepared for smart-machine deployment, a gap universities and online boot camps are racing to fill[28].
Conclusion
At the 6th Annual Beryl Elites conference, leading voices agreed on a central point: while algorithms ingest and analyze unstructured data at unprecedented volumes, the ability to extract true, actionable insight, and to navigate the nuances of rapidly changing data, demands human curiosity, adaptability, and experience. AI is augmenting, not replacing, human judgment.
Dr. Richard Peterson, CEO of MarketPsych carefully noted:
“AI will definitely be used in the financial world… there are 100,000 to 200,000 human analysts that might be a little concerned about their jobs going forward because of the ability to synthesize so much information so quickly.”
“In the next decade, AI will transform how we understand ourselves and many of the systems we live in. The creativity of analysts is going to become much more valuable - we have to be creative, we have to be thoughtful and curious.”
For investors, the takeaway is blunt: integrate AI or surrender alpha. For policymakers, the challenge is to expand compute, energy and talent pipelines without igniting systemic risk. Either way, the story of U.S. finance in the 2020s will be written in code - and the next chapter is already compiling.
Manager Due Diligence for Digital Asset Hedge Funds
Digital assets are no longer a fringe curiosity. According to PwC’s 3rd Annual Global Crypto Hedge Fund Report (2021), 21% of traditional hedge funds now invest in crypto, averaging a 3% allocation. By 2023, 80% of crypto hedge funds and traditional hedge funds were using third-party custodians as their primary custody solution, and most funds now employ multiple types of custody. These numbers point to a maturing ecosystem where investment and operational discipline are prerequisites for institutional adoption.
Part 1. Investment Due Diligence (IDD) Management
As allocations grow, so too does the need for rigorous investment due diligence (IDD), particularly given the unique legal, valuation, and liquidity challenges that set digital assets apart from traditional asset classes. Following is a 5-step process for IDD:
Clarifying the Investment Mandate
IDD starts with understanding the scope of the fund’s investment universe. Crypto exposure can take many forms: liquid tokens such as BTC (Bitcoin) and ETH (Ethereum), DeFi protocols, NFTs (non-fungible tokens), staking, or venture-style investments in Web3 infrastructure. Allocators must confirm several points before acting:
Whether crypto is explicitly permitted in governing documents
How large exposures can become in volatile markets
How the manager handles illiquid or speculative crypto instruments like ICOs (Initial Coin Offerings) or SAFTs (Simple Agreement for Future Tokens)
Per SBAI guidelines, even traditional hedge funds should be asked about implicit crypto exposure in broader macro or currency strategies.
Strategy and Risk Alignment
As the Dilendorf Law Firm notes, strategy selection directly impacts legal classification. Investment due diligence should assess how the manager interprets and mitigates legal risk, particularly in strategies involving DeFi arbitrage, NFT flipping, or pre-token venture investments, and whether that aligns with investor risk tolerance.
Valuation Consistency in a 24/7 Market
Crypto markets operate 24/7, meaning valuation policies must account for continuous trading. Thus, a manager’s valuation policy must be:
Clearly documented with a specific time of day and exchange methodology
Based on multiple exchanges to reduce reliance on anomalous or manipulated prices
Validated with back-testing and reconciliation procedures, especially for thinly traded tokens
As SBAI notes, investors should request audit trails or timestamps in the NAV calculation process, especially during volatile periods.
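The valuation requirements above can be sketched as a snapshot-pricing routine that takes timestamped quotes from several exchanges, discards stale ones, and returns the median. This is a simple illustration with made-up venues and a made-up staleness window, not any fund's actual methodology:

```python
from statistics import median
from datetime import datetime, timezone


def snapshot_price(quotes, nav_time, max_staleness_s=300):
    """
    Median price across exchanges at a fixed NAV timestamp.

    quotes: list of (exchange, price, quote_time) tuples.
    Quotes older than max_staleness_s at nav_time are discarded,
    reducing reliance on a single stale or anomalous venue.
    """
    fresh = [
        price for _, price, t in quotes
        if 0 <= (nav_time - t).total_seconds() <= max_staleness_s
    ]
    if len(fresh) < 2:
        raise ValueError("insufficient independent quotes for NAV pricing")
    return median(fresh)
```

Documenting the snapshot time, venue list, and staleness rule in the valuation policy is what makes the resulting NAV auditable after the fact.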
Transparency and Asset Verification
Blockchain transparency allows for near-real-time position verification if wallet addresses are provided. Fund administrators or auditors should be able to independently verify the existence of the asset, the chain of custody, and the transaction history.
If a fund chooses to practice self-custody, they must demonstrate sole control over private keys, often using encrypted blockchain messages to prove ownership.
Navigating Regulatory Grey Zones
According to PwC, 82% of crypto hedge fund managers cite regulatory uncertainty as the top obstacle to expansion. As a result, IDD teams should assess:
The manager’s awareness of potential reclassification of tokens as securities (e.g., the SEC vs. Ripple case).
Response preparedness to delistings or enforcement actions.
Jurisdictional alignment, particularly when using platforms subject to sanctions or legal scrutiny (e.g., Tornado Cash, FTX fallout).
Conclusion
IDD for digital asset managers goes far beyond traditional portfolio evaluation. It requires a cross-disciplinary lens spanning blockchain infrastructure, regulatory evolution, and technological nuance. In a space where innovation can outpace oversight, effective investment due diligence ensures that institutional capital is aligned with opportunity and long-term resilience.
Part 2. Operational Due Diligence (ODD) Management
While volatility and high returns dominate headlines, it is often operational discipline that tells the true story about fund endurance.
According to the Standards Board for Alternative Investments (SBAI), operational due diligence (ODD) for digital asset managers must directly address risks unique to the crypto ecosystem, particularly in relation to custody, trade execution, valuation, and conflicts of interest.
Custody and Asset Control
Custody remains one of the most complex and consequential risks in the crypto space. A single point of failure, such as a lost private key or compromised wallet, can erase client assets permanently.
According to PwC’s 2023 Global Crypto Hedge Fund Report:
80% of crypto hedge funds use third-party custodians
But 20% still practice self-custody, heightening operational risk
Only 59% use multi-signature wallets, leaving many funds exposed
When addressing these points, ODD teams should consider questions such as: How are private keys stored and secured? Who has access to custody systems, and how is that access logged and controlled? Are custodial solutions insured or subject to regular penetration testing?
Funds that rely on self-custody or ad hoc controls must demonstrate a higher burden of operational rigor to win allocator trust.
Trade Execution and Counterparty Risk
Crypto’s fragmented market structure across centralized exchanges, DEXs, and OTC desks makes execution quality and counterparty reliability critical.
As a result, ODD teams should examine:
Is there a clear policy for selecting and monitoring counterparties?
Are execution platforms vetted for regulatory risk, cybersecurity, and solvency?
Are transaction records reconciled against blockchain data?
A fund's execution strategy should reflect not just market efficiency but also operational feasibility and compliance risk.
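At its simplest, the reconciliation mentioned above is a set comparison between the fund's internal trade ledger and the transactions observed on-chain. A minimal sketch with hypothetical record shapes:

```python
def reconcile(internal_records, onchain_txs):
    """
    Compare the fund's internal trade ledger against transactions
    observed on-chain. Each record is a (tx_hash, asset, amount) tuple.
    Returns (missing_onchain, unexplained_onchain): trades the ledger
    claims but the chain lacks, and chain activity the ledger lacks.
    """
    internal = set(internal_records)
    onchain = set(onchain_txs)
    return sorted(internal - onchain), sorted(onchain - internal)
```

Either mismatch list being non-empty is a red flag for ODD teams: the first suggests fabricated or failed trades, the second unauthorized wallet activity.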
Valuation Oversight in Real-Time Markets
With crypto trading 24/7 across multiple channels, valuation policies must account for real-time volatility and illiquid tokens.
A strong valuation framework should include time-stamped pricing sources and an exchange hierarchy for NAV calculations, policies for handling thin liquidity and wide bid-ask spreads, and the use of third-party valuation agents for complex or bespoke assets.
Inadequate valuation practices can misstate AUM, trigger NAV errors, and mislead investors, especially when tokens are locked, illiquid, or delisted.
Segregation of Duties and Internal Controls
Another point to consider is the separation of responsibilities, the absence of which has played a role in nearly every major crypto fund failure. Whether it is one person controlling the wallets or blurred lines between trading and compliance, lack of segregation creates unchecked risk.
ODD teams should confirm:
That fund personnel roles are clearly separated, even in small teams
That dual control or multi-sig processes are required for all fund transfers
That a third-party fund administrator is in place and reconciles independently
Per PwC, there has been a positive trend in the use of external fund administrators: 91% of crypto hedge funds now use a third party. Yet 9% still operate without one, raising concerns about internal controls and NAV integrity.
Business Continuity and Incident Response
Beyond traditional IT disruptions, ODD teams must also consider exchange bankruptcies, smart contract exploits, and wallet freezes that may affect client portfolios.
ODD teams should plan ahead to mitigate risks through:
Crisis protocols for wallet breaches, exchange outages, or regulatory freezes
Succession planning for key personnel and decision-makers
Regular incident response testing and insurance coverage for operational disruptions
Given how quickly crises unfold in crypto, real-time readiness is essential.
Conclusion
Operational due diligence in cryptocurrencies is the foundation of investor protection. Even as investment strategies become more sophisticated, operational infrastructure must evolve in tandem.
The past failures of firms like FTX and Three Arrows Capital were not just financial; they had operational underpinnings. As the role of digital asset managers continues to evolve, a fund’s long-term operational stability will be a core indicator of its success.
Unlocking Institutional Capital: The Rise of Tokenization of Real‑World Assets
Ray Ma
Abstract
Tokenization transforms off-chain assets—such as U.S. Treasuries, private credit, and real estate—into blockchain-based tokens that retain legal ties to the underlying collateral while gaining programmability, 24/7 transferability, and automated compliance features of digital assets. The total value locked (TVL) in publicly visible tokenized real-world assets (RWAs) surged from approximately $0.5 billion in 2020 to $24.8 billion as of July 2025, reflecting a compound annual growth rate (CAGR) exceeding 170%. As blockchain infrastructure matures and regulations offer clearer frameworks, tokenized assets are increasingly positioned to become a strategic component of institutional investment portfolios. This paper adopts an investment research perspective—mapping the market potential, unpacking the economic rationale behind adoption trends, and assessing the risk-return dynamic for institutional investors.
1. Definitions & Introduction of Related Topics
1.1 Real‑World Assets (RWAs)
Real-world assets are financial instruments that originate in the traditional economy and hold intrinsic or contractual value; examples include bonds, equities, real estate, commodities, structured credit, and trade receivables. Traditionally, these assets are managed through off-chain legal systems, where transactions depend on intermediaries for settlement, custody, auditing, and fractionalization. This setup often leads to delays, inefficiencies, and additional costs, like multi-day settlement periods and fragmented ownership structures.
1.2 Blockchain & Tokenization
Blockchain is a digitally distributed ledger system that can be either open (permissionless) or restricted (permissioned). It stores data securely using cryptographic methods and distributes that data across a network of nodes. By making transaction records securely accessible to all participants, blockchain facilitates near-instant settlement, improves transparency, and enables automation through smart contracts.
Tokenization refers to the issuance of blockchain‑based tokens that legally represent ownership or economic rights to RWAs. Common standards include ERC‑20 for fungible securities, ERC‑721 for non‑fungibles, and ERC‑3643/1400 for compliance‑aware security tokens.
1.3 Tokenized real-world asset (RWA) market
Tokenization serves as a bridge between traditional finance (TradFi) and decentralized finance (DeFi), embedding real-world collateral into blockchain-based tokens. Leading financial institutions, including BlackRock (BUIDL), Franklin Templeton (BENJI), and J.P. Morgan (Project Guardian), have launched pilot initiatives, signaling that regulated capital is entering the tokenized asset space at scale.
2. Market Overview & Opportunities
The tokenized real-world asset market has evolved from its initial phase into early-stage expansion, mainly driven by growing capital deployment and product innovation.
2.1 Historical Expansion
Figure 1 visualizes the exponential rise of TVL in tokenized RWAs, from $0.5 billion in 2020 to $24.8 billion by July 2025. Notable inflection points include: (1) in 2022, Franklin Templeton’s on-chain money market fund surpassed $800 million in AUM (assets under management); and (2) in 2024, BlackRock’s BUIDL fund exceeded $1 billion in assets within twelve months of its launch.
2.2 Current Market Composition
The asset-class distribution of tokenized real-world assets is heavily skewed toward income-generating debt instruments. Private credit dominates the market with approximately $14 billion, accounting for around 56% of the total. This category includes short-duration lending pools, structured notes, and invoice-factoring receivables. U.S. Treasury bills (T-bills) follow, making up around $7.4 billion, 30% of the total, primarily through tokenized T-bill wrappers such as OUSG, BENJI, and BUIDL. And the remaining $3.4 billion, roughly 14%, is distributed among other asset types, including real estate, commodities, and hybrid instruments.
2.3 Market Prospects in 2030
To further illustrate the potential upside, McKinsey provides a compelling yet substantiated view of where tokenized real-world assets (RWAs) are heading. In their analysis, the total market cap could grow to around $2 trillion by 2030 in a conservative case, and up to $4 trillion in a bull case, a huge jump from today’s $24 billion market. This implies an average CAGR of roughly 75%, reflecting the real momentum already building across asset classes like mutual funds, bonds, and securitizations. These are areas where blockchain can make a meaningful difference by increasing transparency and efficiency. Compared with some of the more bullish projections (e.g. BCG’s forecast of a $16 trillion market cap in 2030), McKinsey’s is more cautious but realistic, informed by its in-depth understanding of current adoption barriers. Still, even at this pace, the upside is enormous. For institutions and investors who move early, the opportunity is not just an innovative play; it is a chance to shape and participate in what may become a foundational layer of the future financial system.
3. Pros — Strategic Advantages of RWA Tokenization
Tokenization of real-world assets introduces a set of transformative benefits that are reshaping traditional finance infrastructure. Institutional investors and asset managers are particularly drawn to tokenization for its ability to improve liquidity, reduce frictional costs, enhance legal compliance, and support product innovation. Below are the main advantages, emphasized with financial and operational context. Taken together, these benefits underscore why tokenized RWAs are likely to play a foundational role in the future of capital markets.
3.1 Improved Liquidity and Fractional Ownership
Tokenization allows for fractional ownership of traditionally illiquid assets like commercial real estate, private credit, and sovereign debt. Through smart contracts and automated market makers (AMMs), tokenized assets can be traded 24/7, eliminating the T+2 settlement lag in the traditional market. Moreover, emerging tokenization platforms now offer fractional exposure with lower capital thresholds, allowing both institutions and retail investors to participate in income-generating assets. For instance, platforms like Maple Finance (private credit) and RealT (real estate) allow investors to hold fractional interests in income-generating assets at significantly lower entry points than traditional offerings. Maple brings minimum private credit allocations down from millions to $100,000 for institutions, and RealT opens real estate investment to individuals starting from just $50. As a result, the combination of rapid settlement and fractional ownership significantly improves market accessibility, enhances liquidity, and fosters a more dynamic secondary trading environment. Over time, this democratization of access has the potential to deepen global capital markets and reduce the liquidity premium historically demanded by investors.
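The arithmetic of fractionalization is straightforward. The sketch below uses made-up numbers, not Maple's or RealT's actual terms, to show how a small investment maps to token units and an ownership share when tokens are priced at pro-rata NAV:

```python
def fractional_position(asset_value, token_supply, investment):
    """
    Tokens received and ownership share for a given investment in a
    fractionalized asset, assuming tokens are priced at pro-rata NAV.
    """
    token_price = asset_value / token_supply   # NAV per token
    tokens = investment / token_price          # units the investor receives
    return tokens, tokens / token_supply       # (tokens, ownership fraction)
```

For a $1,000,000 property split into 20,000 tokens, each token prices at $50, so a $50 investor holds exactly one token and a 0.005% interest, which is the low entry point the platforms above advertise.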
3.2 Cost Efficiency and Automated Compliance
The traditional issuance and transfer of real-world assets typically involve a complex chain of intermediaries, such as custodians and brokers, that introduces additional fees, delays, and operational friction. Tokenization can streamline this process by using smart contracts that execute these functions automatically. In traditional finance, for instance, investors rely on third-party verification, such as paying a bank to confirm a transaction; in the blockchain model these associated costs can be avoided, further reducing transaction costs. Tokenization therefore lowers operating costs across the asset lifecycle by requiring less human intervention and reconciliation.
Beyond cost efficiency, tokenization also introduces a shift in regulatory enforcement through its embedded compliance design. Token standards such as ERC-3643 and ERC-1400 incorporate KYC/AML checks, whitelist access control, and transfer restrictions directly into the token logic itself, allowing for automated enforcement of regulatory rules. This compliance-by-design approach offers a scalable model for cross-border distribution and could accelerate global access to private market assets.
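The compliance-by-design pattern can be illustrated with a toy transfer check. This is a hypothetical Python sketch of the idea (transfers permitted only between KYC-whitelisted addresses), not the actual ERC-3643 or ERC-1400 Solidity interfaces, which implement the same rule on-chain via identity registries.

```python
class PermissionedToken:
    """Toy model of compliance-by-design: transfers succeed only between
    addresses that have passed KYC and sit on the issuer's whitelist.
    (Illustrative only; real ERC-3643/ERC-1400 tokens enforce this
    on-chain in Solidity with identity registries.)"""

    def __init__(self):
        self.balances = {}
        self.whitelist = set()

    def add_to_whitelist(self, address: str) -> None:
        # In practice this happens only after off-chain KYC/AML checks.
        self.whitelist.add(address)

    def mint(self, to: str, amount: int) -> None:
        if to not in self.whitelist:
            raise PermissionError(f"{to} is not KYC-whitelisted")
        self.balances[to] = self.balances.get(to, 0) + amount

    def transfer(self, sender: str, recipient: str, amount: int) -> None:
        # The compliance rule lives in the token logic itself.
        if sender not in self.whitelist or recipient not in self.whitelist:
            raise PermissionError("both parties must be whitelisted")
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient balance")
        self.balances[sender] -= amount
        self.balances[recipient] = self.balances.get(recipient, 0) + amount
```

Because the restriction is embedded in the transfer function rather than checked after the fact, a non-compliant transfer simply cannot settle, which is the scalability argument behind compliance-by-design.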
3.3 Auditability and Custody Infrastructure
A key advantage of tokenized assets is real-time verification. Tools like Chainlink's Proof-of-Reserve allow anyone to check instantly whether a token is backed by off-chain assets. This is a major improvement over traditional models, which typically rely on periodic audits and internal controls. By enabling real-time verification, on-chain attestation strengthens investor confidence and minimizes the risk of hidden liabilities.
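The invariant being verified is simple: attested reserves must cover the circulating token supply. The sketch below is only a simplified illustration of that check; Chainlink's actual Proof-of-Reserve feeds deliver the attestation on-chain via oracles rather than through a function like this.

```python
# Simplified proof-of-reserve check: compare the token's on-chain supply
# against the latest attested off-chain reserve balance. Chainlink's real
# Proof-of-Reserve works via on-chain oracle feeds; this only illustrates
# the invariant being verified.
def is_fully_backed(on_chain_supply: float, attested_reserves: float,
                    tolerance: float = 0.0) -> bool:
    """True when attested reserves cover the circulating token supply."""
    return attested_reserves + tolerance >= on_chain_supply

print(is_fully_backed(on_chain_supply=1_000_000, attested_reserves=1_000_000))
print(is_fully_backed(on_chain_supply=1_000_000, attested_reserves=950_000))
```

The contrast with periodic audits is that this check can run on every block, so an under-collateralized token is flagged in minutes rather than at the next quarterly attestation.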
At the same time, the infrastructure for secure custody has matured significantly. Leading providers like Anchorage Digital and Fireblocks now offer institutional-grade wallets that combine advanced security features like multi-party computation (MPC), offline key storage, and permissioned access. These solutions ensure that tokenized assets can be stored and transferred safely, without compromising on compliance or control. As these custody solutions gain regulatory approvals and institutional adoption, they are likely to further legitimize tokenized assets as viable alternatives to traditional financial instruments.
3.4 Programmable RWAs and DeFi Instruments
The programmability of tokenized real-world assets unlocks a new frontier of financial engineering. Once issued on-chain, RWAs can be seamlessly embedded into structured products without relying on traditional intermediaries. For instance, Ondo Finance restructures tokenized U.S. Treasuries into senior and junior tranches, enabling differentiated exposure for risk-averse and risk-seeking investors. These programmable assets can further be integrated into decentralized derivatives markets, such as perpetual futures via Synthetix, wrapped into tokenized ETFs, or utilized within on-chain options protocols. This modularity paves the way for innovative portfolio construction, combining traditional risk-return frameworks with the composability and automation of decentralized finance (DeFi).
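The senior/junior tranching mentioned above follows a standard payment waterfall, which a few lines of code make concrete. This is a toy model of the general technique, not Ondo Finance's actual contract logic.

```python
def waterfall(cashflow: float, senior_due: float) -> tuple:
    """Toy senior/junior waterfall: seniors are paid first up to their
    promised amount; juniors absorb whatever remains (possibly nothing).
    Illustrative only -- not any specific protocol's contract logic."""
    senior_paid = min(cashflow, senior_due)
    junior_paid = cashflow - senior_paid
    return senior_paid, junior_paid

# Good year: $120 of yield arrives, seniors are promised $100,
# so juniors keep the $20 excess.
print(waterfall(120.0, 100.0))
# Bad year: only $80 arrives, so seniors take a haircut and juniors get nothing.
print(waterfall(80.0, 100.0))
```

The asymmetry is the whole product: seniors trade upside for protection, juniors take first losses in exchange for residual yield, and on-chain issuance lets the split be enforced automatically.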
3.5 Global Accessibility
Tokenization expands global access to real-world assets by removing traditional barriers imposed by geography, market structure, and regulation. In traditional finance, access to assets such as private credit or commercial real estate is often limited to institutional investors within specific jurisdictions, constrained by currency controls, onboarding complexity, and cross-border regulatory compliance. Tokenized assets, with their embedded compliance logic, enable programmable controls that ensure regulatory alignment while giving qualified investors worldwide access. Moreover, because blockchain infrastructure supports 24/7 trading and settlement, it largely eliminates the reliance on local market hours and enables cross-region participation in capital markets. As a result, tokenization leads to a more inclusive and global investment landscape, aligning capital formation with the decentralized nature of modern finance.
In summary, these strategic advantages reflect a strong foundational case for tokenized RWAs as a core building block in next-generation financial systems. No longer a niche innovation, tokenization is steadily emerging as a structurally significant development in the modernization of global capital markets.
4. Cons — Structural Headwinds & Risk Factors
While RWA tokenization promises cost savings, fractional access, and programmable compliance, these advantages must be weighed against practical illiquidity concerns, higher technology spend, throughput bottlenecks, and heightened exposure to illicit financial activity. Asset managers should treat tokenized RWAs much like emerging-market structured products: attractive yields may compensate for risk, but only after rigorous testing and consideration of energy profile, finality guarantees, and regulatory perimeter. The following headwinds must be underwritten and actively managed by sophisticated investors:
4.1 Practical Illiquidity and Technical Standardization
While tokenization is often praised for its potential to boost liquidity, the reality today is quite different. Most tokenized real-world assets are practically illiquid, with limited secondary markets, low volumes, and few active buyers. Many of these tokens are only transferable within restricted ecosystems, making it tough to find counterparties and achieve fair pricing. The biggest driver of this illiquidity is the lack of technical standardization across blockchains and smart contracts. Fragmented token formats (ERC-20, ERC-3643, ERC-1400) vary in how they encode ownership rights, compliance logic, and asset metadata, hindering interoperability and increasing operational risk. Much as the SWIFT standard underpins messaging in traditional finance, establishing unified token standards is critical to unlocking the full potential of the tokenized financial ecosystem.
4.2 Technology Cost
Though RWA tokenization saves investors money on transaction fees, blockchain technology is far from free. Many public blockchains, especially proof-of-work (PoW) networks, consume much more electricity than traditional financial systems. Bitcoin clearly exemplifies this drawback: according to the Cambridge Centre for Alternative Finance, the energy consumed by the millions of devices maintaining the Bitcoin network exceeds that of entire nations, and as of recent estimates, Bitcoin's annual electricity usage surpasses that of Pakistan, a country of over 240 million people. Although Ethereum's 2022 transition to proof-of-stake (PoS) reduced its energy consumption by approximately 99.5%, a significant portion of RWA tokenization pilots still rely on energy-intensive chains, such as Bitcoin sidechains and certain Layer 1 protocols. As a result, ESG mandates are increasingly pushing asset managers to disclose the carbon intensity of blockchain-based investments, where excessive on-chain energy use can raise the cost of capital and jeopardize eligibility for sustainability-focused portfolios.
4.3 Throughput Bottleneck and Latency Inefficiency
Public blockchains are constrained by limited transaction throughput. Consider Ethereum, currently the most widely used smart contract platform for RWA tokenization. Ethereum has a base-layer throughput of about 15 transactions per second (TPS) under optimal conditions. This capacity pales in comparison to traditional financial infrastructure: Visa, for instance, is capable of processing up to 65,000 TPS during peak demand. The performance gap stands out in the context of high-volume, high-frequency asset classes like tokenized U.S. Treasuries, leading to transaction backlogs and degraded user experience. Beyond throughput limitations, latency deepens these concerns. Bitcoin's PoW system, for example, takes approximately 10 minutes to add a single block to the blockchain, an interval incompatible with the demands of real-time financial markets. Together, these throughput bottlenecks and latency inefficiencies raise critical scalability concerns for RWA tokenization on public blockchains.
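The scale of the gap is easy to quantify with back-of-the-envelope queueing arithmetic using the throughput figures cited above: when demand arrives faster than the chain can process it, the backlog grows linearly at the difference between the two rates.

```python
# Backlog arithmetic using the figures cited above (15 TPS for Ethereum's
# base layer, 65,000 TPS for Visa at peak). If arrivals outpace throughput,
# unprocessed transactions accumulate at (arrival_rate - throughput) per second.
def backlog_after(arrival_rate: float, throughput: float, seconds: float) -> float:
    """Unprocessed transactions accumulated after `seconds` of sustained load."""
    return max(0.0, (arrival_rate - throughput) * seconds)

eth_tps = 15        # Ethereum base layer, per the text
visa_tps = 65_000   # Visa peak capacity, per the text

# One minute of Visa-scale demand routed through Ethereum base-layer throughput:
print(f"{backlog_after(visa_tps, eth_tps, 60):,.0f} transactions queued")
# The same load on infrastructure sized like Visa's never queues at all:
print(f"{backlog_after(visa_tps, visa_tps, 60):,.0f} transactions queued")
```

A minute of peak-Visa demand would leave a base-layer Ethereum roughly 3.9 million transactions behind, which is why high-frequency tokenized asset flows gravitate toward Layer 2s and permissioned chains.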
4.4 Illegal Financial Activities
Tokenization platforms often integrate compliance mechanisms into the token architecture, using standards like ERC-3643 and ERC-1400, which support permissioned transfers, whitelisting, and on-chain identity tagging. In theory, these frameworks allow issuers to restrict token transfers to pre-approved wallet addresses that have undergone KYC/AML verification, creating a permissioned environment. In practice, however, these controls are inconsistently applied and difficult to coordinate across jurisdictions. There is no universal whitelist, and each issuer typically maintains its own private compliance stack, making cross-platform enforcement fragile.
Furthermore, the integrity of wallet-based whitelisting hinges on the strength of the off-chain KYC process. If a front entity manages to pass identity checks in a lightly regulated jurisdiction, its wallet address becomes functionally indistinguishable from that of a legitimate institution on-chain. This loophole creates opportunities for regulatory arbitrage, enabling criminals to tokenize assets via offshore structures, and then trade them as seemingly compliant tokens within the blockchain ecosystem.
Another example is the dark web where anonymous trading of illegal goods is facilitated through tools like the Tor browser and cryptocurrencies such as Bitcoin. These transactions bypass traditional oversight and violate U.S. regulations. Such regulations mandate the verification of customer identities and screening against watchlists to prevent money laundering, terrorist financing, and other illegal activity.
Although illicit activity accounted for only 0.34% of all cryptocurrency transaction volume in 2023 (Chainalysis, 2024), the potential for abuse remains, especially as digital asset adoption grows. As tokenization of real-world assets scales, regulators and market participants must stay vigilant to ensure that enhanced efficiency does not come at the cost of financial integrity. RWA tokenization platforms, in turn, need higher standards and stronger cybersecurity to identify, prevent, and resolve security issues, an inevitable step on the path to a mature RWA tokenization market.
5. Global Adoption
While the U.S. has led much of the RWA tokenization momentum, regulatory developments abroad are equally instrumental in shaping the global tokenization landscape. Notably, the European Union's Markets in Crypto-Assets (MiCA) regulation offers the first comprehensive framework for digital assets across 27 member states, lowering the barriers for cross-border investment and creating a more scalable environment for institutional investors.
In Asia, Hong Kong and Singapore are positioning themselves as leading crypto finance hubs. The Hong Kong Securities and Futures Commission (SFC) is preparing to launch its third issuance of tokenized green bonds, building on the success of two prior offerings in 2023 and 2024. This initiative signals Hong Kong's strategic move to integrate blockchain into its financial infrastructure and to deepen its digital finance ecosystem. In Singapore, J.P. Morgan and Apollo Global collaborated on a blockchain proof-of-concept (PoC) under the Monetary Authority of Singapore's (MAS) Project Guardian, a regulator-led initiative exploring tokenization in finance. The project brings diverse industries together through collaborative pilot programs to examine how tokenization and interoperable networks can help shape the future of financial infrastructure.
6. Conclusion
Tokenized real-world assets (RWAs) are not merely a technological trend; they represent a structural evolution in how capital markets could operate in the future. By combining the reliability of traditional financial instruments with the flexibility of blockchain, tokenization enables faster settlement, broader access, and more efficient market infrastructure, unlocking value in areas where legacy financial systems fall short.
However, this promising growth brings new challenges. The absence of interoperable technical standards, energy-efficient infrastructure, and regulatory harmonization could leave tokenization fragmented, inefficient, and more exposed to systemic risks. Addressing these structural headwinds is essential, not only to mitigate risk but to build trust in this new asset class.
Ultimately, the long-term success of tokenized RWAs will depend on effective execution and collaboration. If market participants, regulators, and technologists can co-build secure, compliant, and scalable systems, tokenization has the potential to disruptively reshape global finance. It is no longer a question of whether tokenization will take hold, but how quickly and decisively it will transform capital markets.
References
Beryl Elites. (2024, December). EDU Series What are some of the Hurdles to the Widespread Adoption of Tokenization Use Cases? https://www.youtube.com/watch?v=vzEgKJTXyI0
Beryl Elites. (2024, December). EDU Series What are Public Blockchains and What are Challenges Involved in Establishing Trust? https://www.youtube.com/watch?v=zSMY2SOo6E4
Beryl Elites. (2024, December). EDU Series How Can Tokenization Transform Our Economy and Transition Away from TradFi? https://www.youtube.com/watch?v=JUt3Mk9j_2c
Banerjee, A., Sevillano, J., Higginson, M., Rigo, D., & Spanz, G. (2024). From Ripples to Waves: The Transformational Power of Tokenizing Assets. McKinsey & Company. https://www.mckinsey.com/industries/financial-services/our-insights/tokenization-a-digital-asset-deja-vu
BlackRock. (2025, July). USD Institutional Digital Liquidity Fund (BUIDL) factsheet.
Boston Consulting Group (BCG) & ADDX. (2022). On-chain asset tokenization. https://www.bcg.com/publications/2022/on-chain-asset-tokenization
Cambridge Centre for Alternative Finance. (n.d.). Bitcoin electricity consumption index. https://ccaf.io/cbeci
Chainalysis. (2024). 2024 crypto crime trends: Illicit activity down as scamming and stolen funds fall, but ransomware and darknet markets see growth. https://www.chainalysis.com/blog/crypto-crime-2024/
Chainlink Labs. (2025). Proof-of-reserve technical whitepaper. https://chain.link/whitepapers
DTCC. (2024, January). Building the digital-asset securities ecosystem. https://www.dtcc.com/digital-asset-whitepaper
European Union. (2024, June). Markets in Crypto-Assets (MiCA) Regulation. Official Journal of the European Union.
Franklin Templeton. (2025, April). OnChain U.S. Government Money Fund (BENJI) semiannual report.
Parliament of the United Kingdom. (2025, March). Digital Securities Sandbox Act (DSLA). https://www.parliament.uk/
J.P. Morgan. (2024). Project Guardian: Advancing Institutional-Grade Tokenization through Innovation and Collaboration. https://www.jpmorgan.com/kinexys/content-hub/project-guardian
From Advisors to Actors: The Rise of Agentic AI in Finance
Traditional LLMs: Capable but Reactive
Traditional large language models (LLMs), like GPT-4 and Claude, are pattern recognition systems trained to predict the next most likely word in a sequence based on massive datasets. They are effective at generating human-like text, summarizing information, writing code, and answering questions. However, LLMs are fundamentally reactive tools. They respond to input when prompted but do not have extensive memory, goals, or an ability to act independently. They operate in a largely stateless fashion, completing one task at a time without retaining much context across sessions or adapting their behavior over time. In other words, traditional LLMs are advisors: they offer insights when asked but do not make decisions or act on their own.
Agentic AI: Proactive, Autonomous, and Decisive
The push toward agentic AI, in contrast, builds upon traditional LLMs by adding layers of autonomy, memory, and decision-making. These systems are designed to pursue goals proactively. An agentic AI system may consist of an LLM paired with a planning module, persistent memory, access to tools or APIs (such as trading platforms, smart contracts, or databases), and a feedback loop that allows it to learn directly from its own actions. Rather than waiting for a user to prompt it, an agentic system can assess a situation, generate a multi-step plan, execute actions, evaluate outcomes, and adjust its strategy accordingly.
Pounds, E. (2024, October 22). What Is Agentic AI? [Image]. NVIDIA Blog. https://blogs.nvidia.com/blog/what-is-agentic-ai/
For example, an agentic AI could autonomously analyze a smart contract, simulate its behavior, detect potential vulnerabilities, and deploy it if deemed safe—without human intervention at any stage.
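The assess-plan-execute-evaluate loop described above can be sketched in a few lines. The plan, act, and evaluate steps here are hypothetical numeric stand-ins; in a real system they would delegate to an LLM, tool APIs, and domain-specific evaluators.

```python
# Minimal sketch of the agentic loop: plan, act, observe, adjust, repeat
# until the goal is met. The numeric "plan" and "act" steps are hypothetical
# stand-ins for calls to an LLM planner and external tools.
def run_agent(goal: float, max_steps: int = 10) -> float:
    state = 0.0
    for _ in range(max_steps):
        if state >= goal:        # evaluate: has the goal been reached?
            break
        plan = goal - state      # plan: how much remains to be done
        action = min(plan, 1.0)  # act: take a bounded step via a "tool"
        state += action          # observe: record the new state
        # adjust: the next iteration replans from the observed state
    return state

print(run_agent(goal=3.5))
```

The key structural difference from a plain LLM call is the loop itself: the system re-evaluates after every action instead of producing one answer and stopping, and `max_steps` plays the role of a crude built-in kill switch.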
Efficiency Meets Exposure in Financial Systems
In financial applications, the shift from static AI tools to autonomous, agentic systems opens the door to highly efficient yet deeply complex forms of decision-making. Unlike a traditional LLM that passively explains or forecasts market trends, agentic AI can actively implement trading strategies, rebalance portfolios, audit smart contracts, or even manage decentralized treasuries. This promises immense operational efficiency, but it also introduces new types of systemic risk.
These systems do not just analyze; they act. And when many such agents respond to similar signals or operate under overlapping logic, they can trigger or amplify feedback loops, moving markets in lockstep and destabilizing entire ecosystems.
Historical Parallels: When Machines Move Too Fast
This is not a purely speculative risk. The 2008 financial crisis, while often attributed to human overreach and opaque financial instruments, was also one of the first large-scale demonstrations of machine-driven volatility. Automated systems including credit risk models, algorithmic trading platforms, and portfolio rebalancers began to fail in tandem. As volatility spiked, machines across hedge funds and investment banks automatically unwound positions, draining liquidity and accelerating a self-reinforcing collapse.
MarketWatch. (2013, September 18). From 2008 to now: Charts of the financial crisis [Image]. MarketWatch. https://www.marketwatch.com/story/from-2008-to-now-charts-of-the-financial-crisis-2013-09-18
The 2008 crisis was not just a failure of models; it was a failure of people who trusted those models too much and questioned them too little.
Case Study: Knight Capital Group (2012)
We have continued to see what happens when automated systems act without adequate oversight. In 2012, Knight Capital Group deployed a new high-frequency trading algorithm with a code error that caused it to flood the market with unintended trades. Within 45 minutes, it had lost $440 million, a loss that nearly bankrupted the firm.
The system did exactly what it was designed to do. But without proper testing or built-in kill switches, the incident stands as a stark reminder that governance failures in new technologies can unravel financial institutions just as quickly as those technologies helped build them. These models, like the agents currently being built, can prioritize enterprise goals but lack an understanding of broader market health, ethical nuance, and accountability.
Case Study: The Flash Crash (2010)
On May 6, 2010, U.S. stock markets experienced a sudden and extreme drop, now known as the "Flash Crash." Within minutes, major indices plunged by nearly 10%, temporarily wiping out nearly $1 trillion in market value before quickly recovering. The cause was traced to automated trading algorithms reacting to large sell orders, amplifying volatility without human intervention. The episode revealed just how quickly automated agents can destabilize markets when operating on overlapping signals and uncoordinated logic.
Case Study: Terra-Luna Collapse (2022)
In 2022, the Terra-Luna ecosystem collapsed after the failure of its algorithmic stablecoin, UST. The system was designed to autonomously maintain price stability through automated mint-and-burn mechanisms. When UST lost its peg, smart contracts and bots spiraled into self-reinforcing liquidation cycles, wiping out over $40 billion in value. While not a traditional agentic AI system, this event illustrates how automated, goal-directed systems without proper circuit breakers can implode at massive scale, echoing the same dangers posed by unchecked agentic AI in finance.
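The reflexive dynamic behind such a collapse can be modeled very crudely: redeeming the depegged stablecoin mints new units of the sister asset, and selling those units depresses the sister asset's price, which makes the next redemption mint even more. This is a deliberately simplified toy model with illustrative parameters, not a reconstruction of Terra's actual mechanism.

```python
# Highly simplified model of an algorithmic mint-and-burn death spiral of the
# kind described above. Each round, holders burn stablecoins for $1 of newly
# minted sister-asset supply; dumping that supply depresses the sister price,
# so the next round mints even more units. All parameters are illustrative.
def depeg_spiral(sister_price: float, burned_stable: float,
                 rounds: int, price_impact: float = 0.3) -> float:
    """Sister-asset price after `rounds` of redemption-driven selling."""
    for _ in range(rounds):
        minted_units = burned_stable / sister_price   # $1 of sister per $1 burned
        # crude linear price impact: more units dumped, bigger proportional drop
        sister_price *= 1 / (1 + price_impact * minted_units / 1_000_000)
    return sister_price

print(f"Sister asset after 1 round:  ${depeg_spiral(10.0, 5_000_000, 1):.2f}")
print(f"Sister asset after 5 rounds: ${depeg_spiral(10.0, 5_000_000, 5):.2f}")
```

Even in this toy version the feedback is visible: the price fall accelerates the minting, which accelerates the price fall, which is why circuit breakers matter for any goal-directed automated system of this kind.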
Accelerated Adoption, Diminished Oversight
Agentic AI systems raise the same concern—but this time at a much larger scale. As the implementation of agents promises greater efficiency, major banks and hedge funds are quietly reducing headcount, streamlining operations, and reassigning analyst job descriptions. Yet, the accelerated pace of removing human oversight in financial institutions has not been met by equal efforts to regulate the extent of agentic AI reliance.
These agents' logic may be mathematically sound but contextually blind. In a volatile, interconnected market, even subtle errors in one agent's decision tree can ripple outward. This is especially dangerous if no human is watching closely enough to intervene.
To avoid repeating the same pattern, the financial industry must resist the temptation to treat automation as an excuse for detachment.
A Call for Human-in-the-Loop Systems
Instead, the industry must build systems that keep humans meaningfully in the loop as essential interpreters. Because in finance, as in all high-risk systems, the delegation of power does not abdicate accountability. When algorithms execute at scale and speed, only well-designed human-machine collaboration can ensure these systems stay safe, fair, and resilient.
Legal and Ethical Vacuum: Who Is Accountable?
Policymakers now face urgent questions:
Can an AI agent be held legally responsible?
Should creators be required to build in explainability or “kill switches?”
If not, how do we uphold transparency, fairness, and trust?
And if an autonomous agent facilitates illicit activity, causes a major trading error, or misprices systemic risk, who is to blame?
The model itself?
The developers who trained it?
The institution that deployed it?
The decentralized autonomous organization (DAO) that funded its operation?
Today’s legislative framework offers few answers. Unlike traditional financial entities, agentic AI systems often lack a clearly defined regulatory subject, making it difficult to apply existing standards around fiduciary duty, liability, and compliance.
References
[1] The New York Times. (2012, August 2). Knight Capital says trading mishap cost it $440 million. DealBook. https://archive.nytimes.com/dealbook.nytimes.com/2012/08/02/knight-capital-says-trading-mishap-cost-it-440-million/
[2] Schaefer, S. (2012, August 2). Knight Capital trading disaster carries $440 million price tag. Forbes. https://www.forbes.com/sites/steveschaefer/2012/08/02/knight-capital-trading-disaster-carries-440-million-price-tag/
[3] Dobbs, O. (2018, June). Arbitrage in the Age of Machine-Made Volatility. The Hedge Fund Journal, Issue 133. https://thehedgefundjournal.com/arbitrage-in-the-age-of-machine-made-volatility/
Stablecoins: Use Cases, Risks, and Opportunities
What are Stablecoins?
Stablecoins are a type of cryptocurrency typically backed by fiat currencies, such as the U.S. dollar, to maintain a stable value. Built on blockchain technology, they enable fast, secure, and programmable transfers of value, making them well-suited for efficient payments.
The Rise and Utility of Stablecoins
Stablecoins have emerged as a key bridge between traditional finance and the rising digital asset space. Today, over $1 trillion in stablecoin transactions occur monthly, with more than 90% being driven by proprietary trading firms, hedge funds, and market makers. These institutions use stablecoins primarily to manage liquidity, rebalance portfolios, and settle trades in decentralized exchanges.
For hedge fund managers, in particular, stablecoins are also changing the operational playbook. Rather than relying exclusively on traditional fiat rails, funds can now transact directly with on-chain financial infrastructure. Stablecoins are not only becoming a new asset class, but are also transforming how capital is moved, allocated, and hedged in an increasingly digital financial ecosystem.
Regulation Versus Growth
Yet stablecoins are not just technical tools; their impact is becoming increasingly central to geopolitical strategy, and they must be regulated and constructed with caution. In the debate over whether to prioritize regulation or growth, former SEC Chair Gary Gensler has warned that stablecoins could allow U.S. dollars to move outside the traditional banking system, potentially circumventing sanctions and anti-money laundering laws. There are other skeptics too, including Jamie Dimon, CEO of JPMorgan Chase, who declared, "The only true use case for [cryptocurrency] is criminals, drug traffickers…money laundering, [and] tax avoidance." These statements are backed by a series of graphs from a 2024 Crypto Crime Report, shown below, which indicate that stablecoins have risen to become the cryptocurrency with the greatest share of illicit transaction volume.
Chainalysis. (2024). In Crypto Crime Report 2024. Retrieved from https://onchain.org/magazine/stablecoins-in-crypto-crime-a-solution-in-disguise/
These numbers warrant caution when analyzing the implementation of cryptocurrencies like stablecoins in today's financial market. But they also do not tell the full story. A closer look shows, for example, that the actual share of illicit cryptocurrency activity is quite small, even smaller than the estimated share of cash transacted illicitly each year, which the UN Office on Drugs and Crime puts at about 2-5% of global GDP.
For stablecoin issuers, reports like these can generate opposition and slow down legislative decisions on regulatory guidance. The slowing of progress also raises concerns about stablecoin issuers losing early leadership in the international currency space. Ethereum co-founder Joe Lubin argues that moving quickly into volatile spaces is key to reinforcing the global influence of the U.S. dollar, particularly as other nations adopt digital assets to stabilize their economies. Lubin's position is backed by venture capital firm a16z, which published a report on the state of crypto pointing out that, "One of the tailwinds, at least in the U.S., is the realization that stablecoins can fortify the U.S. dollar's position abroad even as the dollar's global reserve currency status slips." As shown below, compared to other stores of currency, stablecoins are a growing market that the U.S. already has and can continue to leverage in international markets to extend the dollar's prevalence.
a16z Crypto. (2024). In State of Crypto Report 2024: New data on swing states, stablecoins, AI, builder energy, and more. Retrieved from https://a16zcrypto.com/posts/article/state-of-crypto-report-2024/
Both Gensler and Lubin have reason to rally for progress. Regulation without room for future progression will result in untapped potential, and progression without regulation will result in unrestrained financial disorder. Both perspectives therefore highlight a critical tension: how can the U.S. develop sound regulation without falling behind in a space that is rapidly reshaping global finance?
Regulatory Use Cases and Financial Sector Disruption
One of the key developments demanding a balanced regulatory response is the growing shift away from traditional financial intermediaries toward stablecoin-based alternatives. According to an investment analysis by ARK Investment Management LLC, the rise in stablecoin adoption is placing mounting pressure on banks to reevaluate their business models, particularly regarding transaction fee revenue. With stablecoins enabling direct peer-to-peer transfers via blockchain networks, banks are projected to lose an estimated $143 billion in fee income as they are removed from the transaction process altogether. While this efficiency benefits consumers by lowering costs, and forms a core part of the stablecoin value proposition, it also introduces an important tradeoff that cannot be ignored. Removing banks as intermediaries eliminates a critical layer of oversight, weakening safeguards around fraud detection, anti-money laundering compliance, and systemic accountability.
ARK Investment Management LLC. (2025). Growth in Stablecoin Transaction Volume Photograph. In Big Ideas 2025. Retrieved from https://research.ark-invest.com/hubfs/1_Download_Files_ARK-Invest/Big_Ideas/ARK%20Invest%20Big%20Ideas%202025.pdf
Since traditional banks and payment processors are subject to strict regulations designed to prevent fraud, manage risk, and ensure financial stability, a sudden switch to digital assets without clear classifications and established use cases could increase vulnerability to financial crimes or systemic shocks. In addition, the concentration of stablecoin infrastructure among a few large tech or cryptocurrency firms would create centers of power and control that, without regulatory standards, may operate with less transparency and accountability than traditional banks. Setting regulatory standards that address these concerns is therefore essential to ensure consumer protection and market stability as adoption grows.
Several of these concerns have already begun to be addressed in the EU, and the U.S. is now making strides to catch up to that regulatory lead. The EU's implementation of the Markets in Crypto-Assets (MiCA) framework in 2023, which established comprehensive rules for stablecoins and broader digital asset markets, quickly became a popular point of reference for those urging a similar set of governing infrastructure in the United States.
As a result, the GENIUS Act, recently passed in a bipartisan Senate vote and heading for debate on the House floor, was born as a step toward addressing these concerns. It would create a federal framework specifically for "payment stablecoins," allowing both banks and nonbanks to issue them under clear reserve requirements and regulatory oversight. As Dante Disparte of Circle, a prominent leader in the stablecoin field, emphasized, the legislation aims to prevent the entry of unregulated or deceptive products, like the now-defunct Terra Luna stablecoin, by enforcing strict standards and criminal penalties for fraud.
From an international standpoint, the GENIUS Act helps protect the U.S. dollar’s role by ensuring that dollar-backed stablecoins are issued under clear, trustworthy regulations. This gives both domestic and global users confidence in using U.S. dollar-denominated stablecoins over foreign alternatives.
Future Navigation of Stablecoin Landscape
As a result, there are several key questions we should continue to consider as financial institutions continue to navigate this landscape. Some of which include: How will regulatory frameworks balance consumer data protection and financial accountability? And in what ways might traditional payment networks and banks evolve or reinvent themselves in response to stablecoin adoption?
Some companies have already made moves. Financial giant Visa, for instance, has given investors and onlookers insight into how companies may innovate to reduce their risk of displacement. Visa's endorsement of the GENIUS Act highlights the company's strategic plan to evolve alongside stablecoin regulations. By describing the legislation as needed to bring "fit-for-purpose regulatory clarity," Visa signals its support for a framework that legitimizes and structures the use of stablecoins within the broader financial system. Rather than resisting these changes, Visa appears to be actively positioning itself as a bridge between traditional finance and emerging blockchain technologies. It has announced initiatives such as integrating Visa credentials and tokens with stablecoin platforms, enabling native stablecoin settlement, and supporting programmable money, demonstrating an effort to remain relevant as payment models shift. By offering tools that connect fiat and digital currencies, and by supporting cross-border stablecoin flows, Visa is taking the driver's seat in embracing this trend.
Overall, the stablecoin industry remains in a formative stage. To ensure its long-term sustainability, both legislators and fintech leaders must contribute to the discussion around thoughtful regulation and show a willingness to adapt.
References
[1] Valente, L. (2025, June 5). Stablecoins could become one of the US government’s most resilient financial allies. ARK Invest. Retrieved from https://www.ark-invest.com/articles/analyst-research/stablecoins-as-a-us-financial-ally
[2] Nagle, M. (Photographer). (2025). [Photograph of Circle IPO]. Bloomberg via Getty Images. Published in Kacik, A. (2025, June 5). Circle Internet Group stock remains in high demand after IPO. Investopedia. Retrieved from https://www.investopedia.com/circle-internet-group-stock-remains-in-high-demand-after-ipo-11750466
[3] Heeb, G., Andriotis, A., & Dawsey, J. (2025, June 13). Walmart and Amazon are exploring issuing their own stablecoins. The Wall Street Journal. Retrieved from https://www.wsj.com/finance/banking/walmart-amazon-stablecoin-07de2fdd
[4] ARK Investment Management LLC. (2025). Big Ideas 2025: Unlocking exponential growth through disruptive innovation. Retrieved from https://research.ark-invest.com/hubfs/1_Download_Files_ARK-Invest/Big-Ideas/ARK%20Invest%20Big%20Ideas%202025.pdf
[5] Ramkumar, A. (2025, June 17). Senate passes stablecoin bill in big win for crypto industry. The Wall Street Journal. Retrieved from https://www.wsj.com/finance/currencies/senate-passes-stablecoin-bill-in-big-win-for-crypto-industry-44ee8aeb
[6] European Securities and Markets Authority. (n.d.). Markets in Crypto-Assets Regulation (MiCA). Retrieved June 26, 2025, from https://www.esma.europa.eu/esmas-activities/digital-finance-and-innovation/markets-crypto-assets-regulation-mica
[7] Forestell, J. (2025, April 30). Visa’s role in stablecoins. Visa Perspectives. Retrieved from https://corporate.visa.com/en/sites/visa-perspectives/innovation/visas-role-in-stablecoins.html
[8] Demos, T. (2025, June 24). How Visa and Mastercard can survive the stablecoin threat. The Wall Street Journal. Retrieved from https://www.wsj.com/finance/banking/visa-mastercard-stablecoin-crypto-21e37f84
[9] Matsuoka, D., Hackett, R., & Lazzarin, E. (2024, October 16). State of Crypto Report 2024: New data on swing states, stablecoins, AI, builder energy, and more. a16z Crypto. Retrieved from https://a16zcrypto.com/posts/article/state-of-crypto-report-2024/
[10] European Commission. (n.d.). Money laundering. Retrieved June 26, 2025, from https://home-affairs.ec.europa.eu/policies/internal-security/organised-crime-and-human-trafficking/money-laundering_en
[11] Onchain. (2024, December 27). Stablecoins in crypto crime: A solution in disguise? Onchain Magazine. Retrieved from https://onchain.org/magazine/stablecoins-in-crypto-crime-a-solution-in-disguise/
Tether Limited (USDT) vs. Circle (USDC)
Tether (USDT) and USD Coin (USDC) are two of the most prominent stablecoins in the current cryptocurrency system, with the former comprising roughly 70% of the total market share. Both stablecoins are theoretically backed by the U.S. dollar and aim to provide price stability while enabling fast, digital transactions. However, they differ significantly in how they are issued, managed, and regulated.
Tether is issued by Tether Limited, a company affiliated with the exchange Bitfinex. It operates across multiple blockchains and is widely used on global exchanges. USDT claims to be fully backed by reserves, which have historically included a mix of assets such as commercial paper and loans, not solely cash. As a result, this has raised concerns about the level of transparency and the stability of Tether’s backing.
USDC, in contrast, is issued by Circle, a U.S.-based fintech firm that recently completed its IPO. USDC is fully backed by cash and short-term U.S. Treasuries and is subject to regular attestation reports from independent auditors. Circle has also placed emphasis on regulatory compliance and transparency, making USDC the more common choice among institutions such as U.S.-regulated platforms and DeFi protocols.
Together, the two stablecoins serve different market needs. USDT prioritizes liquidity and global reach, while USDC focuses on trust and regulatory alignment.
Corporate Interest: Amazon, Walmart, and Stablecoins
Large-scale retail companies like Amazon and Walmart are well-positioned to explore the use of stablecoins due to their technological infrastructure and extensive global supply chains. These firms already manage large-scale payment systems, vendor relationships, and e-commerce platforms, all of which stablecoins could make more efficient. Integrating stablecoins would therefore allow them to process payments faster by bypassing banks and to cut interchange fees through more cost-effective payment rails, particularly in international contexts.
Other potential benefits of transitioning to stablecoin usage include:
Faster vendor payments: Stablecoins can settle payments instantly, reducing the lag time between international wires or bank transfers.
Lower transaction costs: By reducing or bypassing card networks and bank intermediaries, companies may save on processing fees.
Operational efficiency: Smart contracts could automate back-end processes such as rebates, supply chain payments, and refunds—improving accounting efficiency.
Loyalty and rewards: Companies could experiment with branded stablecoins or token-based reward systems that create deeper customer engagement.
While these companies have not yet formally issued stablecoins, their interest in blockchain-based solutions suggests an openness to rethinking traditional financial infrastructure. The establishment of such networks would enable faster, cheaper, and more direct financial interactions between buyers and sellers, but it also introduces a new set of challenges: these advances disrupt core pillars of traditional financial structures.
Financial Reinforcement Learning
Algorithmic trading plays an important role in financial markets, accounting for over 60% of the total trading volume in the U.S. equity market. It automates the decision-making process, which requires dynamically deciding when to trade, what assets to trade, at what price, and in what quantity. These decisions must be made in real-time within highly volatile and uncertain market environments.
Conventional machine learning (ML) approaches to algorithmic trading typically rely on a multi-stage pipeline involving multiple models: alpha models to generate signals, risk models to estimate exposure, portfolio construction models to allocate capital, and execution models to carry out trading strategies. These models are often optimized independently and then manually coordinated, which can ignore the sequential nature of trading decisions and adapt poorly to changing market conditions. Reinforcement learning (RL), specifically deep RL (DRL), flips the script by training a single model to learn optimal trading strategies through trial and error, much like a human trader but at machine speed and scale. It simplifies the workflow while adapting to dynamic market conditions.
Financial reinforcement learning (FinRL) is an interdisciplinary field that applies RL algorithms to financial tasks, such as algorithmic trading, portfolio management, option pricing, hedging, and market making [1-5]. Developed by the SecureFinAI Lab at Columbia University, FinRL aims to automate the design pipeline of a DRL trading strategy, allowing for efficiency, customization, and hands-on tutorials. One current research focus is the integration of large language model (LLM)-generated signals from financial documents into FinRL, leveraging both structured market data and unstructured textual data. From 2023 to 2025, we organized three FinRL contests based on FinRL research, covering diverse tasks such as stock trading, crypto trading, and LLM-generated signals for FinRL. The contests attracted around 200 students, researchers, and industry practitioners from over 100 institutions across 22 countries.
Why Is RL the Right Language?
DRL is particularly well-suited for financial tasks because it can solve dynamic and sequential decision-making problems, which are pervasive in finance. For example, trading decisions are made continuously over time, and each decision depends on the current market conditions and the trader’s portfolio. These decisions are sequential: today’s action affects tomorrow’s opportunities, and feedback (e.g., gains or losses) helps refine future decisions.
The figure illustrates this process in FinRL with an example. The agent represents the trader making decisions, and the environment represents the financial market. The trader (agent) observes the current market conditions (state sₜ), such as its account information, market prices, and financial news. It also perceives the profit earned so far and anticipates that the price of AAPL will rise, so it decides to buy 5 shares of AAPL (action aₜ). After this action, the market (environment) transitions to a new state sₜ₊₁, and the trader (agent) receives a profit (reward rₜ₊₁, calculated as the change in total asset value). This trial-and-error learning allows the DRL agent to find a strategy that maximizes cumulative rewards over time, adapting to changing market conditions.
More formally, we formulate the trading task into a Markov Decision Process:
State is a snapshot of the current market conditions: the information traders know when making a decision. It typically includes account balance, asset prices, shares held, technical indicators (e.g., MACD, RSI), fundamental indicators (e.g., P/E ratio), LLM-generated signals from financial documents (e.g., news sentiment), and so on.
Action is what the agent can do as a trader: buy, sell, or hold; go short or long; or adjust portfolio weights.
Reward is an incentive signal. A common choice is the change in total asset value; it can also be a risk-adjusted return, such as the Sharpe ratio. The agent learns to act so as to maximize cumulative reward.
Through repeated interaction with the market environment, the agent learns a policy (typically a deep neural network), a probability distribution over actions at each state, that aims to maximize cumulative rewards over time.
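The state-action-reward loop above can be made concrete with a toy example. The environment below is a deliberately minimal stand-in (one asset, a naive buy-one-share policy; the class and its state layout are invented for illustration and are not FinRL's implementation):

```python
import numpy as np

class ToyMarketEnv:
    """Minimal single-asset market environment illustrating the MDP loop."""

    def __init__(self, prices):
        self.prices = prices        # daily prices of one asset
        self.t = 0                  # current time step
        self.cash = 1_000.0
        self.shares = 0.0

    def state(self):
        # State: account balance, current price, shares held.
        return np.array([self.cash, self.prices[self.t], self.shares])

    def step(self, action):
        # Action: number of shares to buy (>0) or sell (<0), clipped so the
        # agent cannot short or spend more cash than it has.
        price = self.prices[self.t]
        trade = float(np.clip(action, -self.shares, self.cash / price))
        self.cash -= trade * price
        self.shares += trade
        self.t += 1
        # Reward: change in total asset value after the market moves.
        old_value = self.cash + self.shares * price
        new_value = self.cash + self.shares * self.prices[self.t]
        done = self.t == len(self.prices) - 1
        return self.state(), new_value - old_value, done

env = ToyMarketEnv(prices=[100.0, 101.0, 99.5, 102.0])
total_reward, done = 0.0, False
while not done:
    state, reward, done = env.step(1.0)   # naive policy: buy 1 share per day
    total_reward += reward
```

At the end of the price path, `total_reward` equals the overall change in total asset value (here 5.5). A DRL agent replaces the hard-coded action with one sampled from its learned policy.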
Financial Data and Market Environment
The financial market data is dynamic, non-stationary, and has a low signal-to-noise ratio. FinRL transforms the raw data into standard market environments, where agents can learn and trade.
The financial data utilized in FinRL include:
OHLCV data. OHLCV (open, high, low, close, volume) data is the typical historical volume-price data in finance. This dataset is prepared through yfinance and FinRL-Meta. We also provide data APIs for various financial markets, hosted in the open-source FinRL-Meta repository.
Limit order book (LOB) for cryptocurrency trading. LOB data offers a detailed view of market depth and liquidity, capturing the behavior of market participants and providing valuable insights into market trends. We use second-level LOB data of Bitcoin for crypto trading.
Financial news. We utilize the Financial News and Stock Price Integration Dataset (FNSPID), which contains 15 million time-aligned financial news articles from 1999 to 2023 for S&P 500 constituent companies. We also provide data APIs to access financial news and documents, hosted in the open-source FinGPT repository.
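From such OHLCV data, per-stock features are derived before training. As a rough sketch, the snippet below computes two of the classical indicators (MACD and RSI) from synthetic close prices with pandas; the 12/26-period EMAs and 14-period RSI window are the conventional defaults, not necessarily FinRL's exact settings:

```python
import numpy as np
import pandas as pd

# Synthetic close prices standing in for a real OHLCV series (illustrative).
rng = np.random.default_rng(0)
close = pd.Series(100 + np.cumsum(rng.normal(0, 1, 200)))

# MACD: difference between a fast and a slow exponential moving average.
ema_fast = close.ewm(span=12, adjust=False).mean()
ema_slow = close.ewm(span=26, adjust=False).mean()
macd = ema_fast - ema_slow

# RSI: relative strength of average gains vs. average losses over a window.
delta = close.diff()
avg_gain = delta.clip(lower=0).rolling(14).mean()
avg_loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + avg_gain / avg_loss)
```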
FinRL also provides the following features:
Market indicators. Classical market indicators are calculated from OHLCV data, including Moving Average Convergence Divergence (MACD), Bollinger Bands (upper and lower), Relative Strength Index (RSI), Commodity Channel Index (CCI), Directional Movement Index (DMI), 30-day and 60-day Simple Moving Averages, the Volatility Index (VIX), and Turbulence.
ML-learned factors. We adapted 101 formulaic alphas for LOB data and trained a recurrent neural network (RNN) to extract 8 strong factors. These factors have strong predictive power and capture complex market dynamics.
LLM-generated signals. We utilize LLMs, such as DeepSeek-V3, to analyze financial news and generate actionable trading signals. An LLM assigns each news item a sentiment score from 1 (strongly negative) to 5 (strongly positive) and a risk level from 1 (low risk) to 5 (high risk). These signals can then be included in the state and used to adjust the action and reward.
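A simple way such scores could enter the loop, sketched under the assumption of a linear mapping from scores to adjustments (the function names and formulas here are illustrative, not FinRL's exact implementation):

```python
def sentiment_multiplier(score: int, max_adj: float = 0.10) -> float:
    """Map an LLM sentiment score in {1..5} (3 = neutral) to an action scale.

    A score of 5 amplifies the action by up to 10%; a score of 1 dampens it
    by up to 10%. The linear mapping is an illustrative assumption.
    """
    return 1.0 + max_adj * (score - 3) / 2

def risk_penalty(reward: float, risk_level: float,
                 threshold: float = 3.0, penalty: float = 0.02) -> float:
    """Shrink the reward when the aggregated risk level exceeds a threshold."""
    if risk_level > threshold:
        reward -= penalty * abs(reward) * (risk_level - threshold)
    return reward

# Strongly positive news scales a 10-share buy up to 11 shares.
adjusted_action = 10 * sentiment_multiplier(5)
```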
FinRL processes the market data and features into gym-style environments. For example, for trading 30 constituent stocks in Dow Jones Index:
State is a 241-dimensional vector, including the account balance, prices of the 30 stocks, shares held in the 30 stocks, and 6 signals for each of the 30 stocks (the technical indicators MACD, RSI, CCI, and ADX, plus the LLM-generated sentiment score and risk level).
Action is a 30-dimensional vector. Each positive (negative) entry represents buying (selling) a certain amount of shares for that stock. The action is adjusted by the LLM-generated sentiment score – the action for a stock is amplified by up to 10% under positive sentiment and dampened by up to 10% under negative sentiment.
Reward is the change in total asset value. To incorporate risk adjustment, LLM-generated risk levels—assigned to each stock based on news content—are aggregated into a weighted average portfolio-level risk factor. This factor is then used to penalize the reward when the overall risk level is high.
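Assuming the ordering just described (balance, then prices, then holdings, then the six per-stock signals), the state vector can be assembled and its dimension sanity-checked:

```python
import numpy as np

n_stocks = 30
balance = np.array([1_000_000.0])       # 1: account balance
prices = np.full(n_stocks, 100.0)       # 30: price per stock
holdings = np.zeros(n_stocks)           # 30: shares held per stock
# 6 signals per stock: MACD, RSI, CCI, ADX, LLM sentiment, LLM risk level
signals = np.zeros((n_stocks, 6))

state = np.concatenate([balance, prices, holdings, signals.ravel()])
assert state.shape == (241,)            # 1 + 30 + 30 + 30*6 = 241
```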
The state, action, and reward can be specified for different tasks. For example, the state can include more technical indicators, as noted above. The action can be continuous or discrete; it can also support long-short strategies or be defined as portfolio weights in portfolio management tasks. The reward can use a risk-adjusted return, such as the Sharpe ratio, in addition to the change in total asset value.
We also incorporate near-real market constraints in the environment:
Transaction costs. A cost of 0.1% is charged on each buy or sell action, accounting for commission and slippage.
Market volatility. The Turbulence Index and VIX serve as risk indicators: larger values indicate increased volatility driven by investor fear and uncertainty, while smaller values indicate greater market stability.
FinRL also provides GPU-optimized, massively parallel environments for simulation, which improves sampling speed and alleviates the sampling bottleneck. Experiments show that with 2,048 parallel environments, sampling speed improves by up to 1,650 times.
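The speedup comes from replacing a Python loop over environments with one batched array operation per step. A minimal NumPy sketch of the idea (FinRL's GPU environments apply the same pattern to CUDA tensors; the toy reward here is simply the mark-to-market P&L of a fixed position):

```python
import numpy as np

def step_batch(prices, t, actions):
    """Advance n parallel toy environments one step in a single operation.

    prices: (n_envs, horizon) price paths; actions: (n_envs,) position sizes.
    Returns the per-environment reward (change in position value).
    """
    return actions * (prices[:, t + 1] - prices[:, t])

rng = np.random.default_rng(1)
n_envs, horizon = 2048, 64
prices = 100 + np.cumsum(rng.normal(0, 1, (n_envs, horizon)), axis=1)
rewards = np.stack([step_batch(prices, t, np.ones(n_envs))
                    for t in range(horizon - 1)])   # one row per time step
```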
FinRL Agents: Learning to Trade
FinRL agents typically use a simple multi-layer perceptron (MLP) as the policy network, which outputs a probability distribution over actions at each state. FinRL supports more than 10 popular DRL algorithms, such as Proximal Policy Optimization (PPO), Advantage Actor-Critic (A2C), and Deep Deterministic Policy Gradient (DDPG), implemented in ElegantRL.
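A minimal PyTorch sketch of such a policy network, using two hidden layers of 64 and 32 units; the Gaussian head over a continuous 30-dimensional action is a common choice for PPO-style agents and is an assumption here, not FinRL's exact architecture:

```python
import torch
import torch.nn as nn

class MLPPolicy(nn.Module):
    """MLP policy: state -> distribution over continuous trading actions."""

    def __init__(self, state_dim: int = 241, action_dim: int = 30):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.Tanh(),
            nn.Linear(64, 32), nn.Tanh(),
            nn.Linear(32, action_dim),
        )
        # Learnable log standard deviation, shared across states.
        self.log_std = nn.Parameter(torch.zeros(action_dim))

    def forward(self, state: torch.Tensor) -> torch.distributions.Normal:
        mean = self.net(state)
        return torch.distributions.Normal(mean, self.log_std.exp())

policy = MLPPolicy()
dist = policy(torch.zeros(1, 241))   # distribution over the 30-dim action
action = dist.sample()
```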
Evaluation and Performance
We evaluate models using well-established quantitative metrics in finance, such as cumulative return, Sharpe ratio, and maximum drawdown. These metrics provide a balanced view of performance and risk.
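All three metrics can be computed directly from the series of daily total asset values. The helper below is a plain NumPy sketch; annualizing the Sharpe ratio with 252 trading days is a standard convention, not something the text specifies:

```python
import numpy as np

def evaluate(asset_values, periods_per_year=252):
    """Cumulative return, annualized Sharpe ratio, and maximum drawdown
    computed from a series of daily total asset values."""
    v = np.asarray(asset_values, dtype=float)
    returns = v[1:] / v[:-1] - 1
    cumulative_return = v[-1] / v[0] - 1
    sharpe = np.sqrt(periods_per_year) * returns.mean() / returns.std()
    running_max = np.maximum.accumulate(v)
    max_drawdown = ((v - running_max) / running_max).min()
    return cumulative_return, sharpe, max_drawdown

cr, sharpe, mdd = evaluate([100.0, 110.0, 99.0, 121.0])
# cr is the 21% overall gain; mdd is the -10% dip from 110 to 99
```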
Here we show experiment results for the stock trading task on the 30 constituent stocks of the Dow Jones index. We use historical daily OHLCV data from 01/01/2021 to 12/01/2023 (734 trading days) and the 10 market indicators listed previously. All models are trained, validated, and tested on a rolling-window basis with 30-day training, 5-day validation, and 5-day testing windows. We use PPO, SAC, and DDPG agents. The policy network for each agent is an MLP with two hidden layers of 64 and 32 units, respectively, trained with a learning rate of 3×10⁻⁴ and a batch size of 64. We also used ensemble methods to reduce policy instability: Ensemble 1 consists of 1 PPO, 1 SAC, and 1 DDPG agent; Ensemble 2 of 5 agents of each type; and Ensemble 3 of 10 agents of each type, with a weighted average determining the final action.
The performance is shown below. The PPO agent achieves the highest cumulative return of 63.37%, a Sharpe ratio of 1.55, and a Sortino ratio of 2.44, showing an ability to maintain high returns with controlled volatility and downside risk. Ensemble 1 shows superior performance from Sep 2022 to Oct 2023 and achieves the smallest maximum drawdown. All individual agents and ensemble models significantly outperform two traditional baselines, the DJIA and a min-variance strategy, across all metrics.
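The weighted-average rule for combining agent actions can be sketched as follows; the particular weights are illustrative, since the report does not specify how they are derived (recent validation performance would be one natural choice):

```python
import numpy as np

def ensemble_action(actions, weights=None):
    """Combine per-agent actions of shape (n_agents, action_dim) into one
    final action by weighted average; uniform weights if none are given."""
    actions = np.asarray(actions, dtype=float)
    if weights is None:
        weights = np.ones(len(actions))
    weights = np.asarray(weights, dtype=float)
    return weights @ actions / weights.sum()

# Three agents (e.g., PPO, SAC, DDPG) proposing trades for two stocks:
combined = ensemble_action([[10, -5], [6, -3], [2, -1]],
                           weights=[0.5, 0.3, 0.2])   # -> [7.2, -3.6]
```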
Conclusion
FinRL has already shown its potential in a range of applications, such as algorithmic trading, portfolio management, and option pricing. With its ability to incorporate signals from market data and financial news, FinRL allows agents to make more informed decisions.
Looking ahead, FinRL will continue exploring and integrating LLM-generated signals from multimodal financial data, such as SEC filings, earnings conference calls, and alternative data. This will build agents that better reflect how professional investors synthesize information.
Keyi Wang, SecureFinAI Lab at Columbia University, and The Beryl Consulting Group collaborated on this report.
References
[1] Liu XY, Yang H, Chen Q, et al. FinRL: A deep reinforcement learning library for automated stock trading in quantitative finance. Deep Reinforcement Learning Workshop, NeurIPS. 2020.
[2] Liu XY, Yang H, Gao J, Wang CD. FinRL: deep reinforcement learning framework to automate trading in quantitative finance. ACM International Conference on AI in Finance. 2022.
[3] Hambly B, Xu R, Yang H. Recent advances in reinforcement learning in finance. Mathematical Finance. 2023;33(3):437–503.
[4] Sun S, Wang R, An B. Reinforcement learning for quantitative trading. ACM Transactions on Intelligent Systems and Technology. 2023;14(3):1–29.
[5] Bai Y, Gao Y, Wan R, Zhang S, Song R. A review of reinforcement learning in financial applications. Annual Review of Statistics and Its Application. 2025;12(1):209–232.
[6] Liu XY, Xia Z, Yang H, et al. Dynamic datasets and market environments for financial reinforcement learning. Machine Learning - Nature. 2024.
[7] Holzer N, Wang K, Xiao K, Yanglet XYL. Revisiting Ensemble Methods for Stock Trading and Crypto Trading Tasks at ACM ICAIF FinRL Contest 2023-2024. arXiv:2501.10709. 2025.
[10] Dong Z, Fan X, Peng Z. FNSPID: A Comprehensive Financial News Dataset in Time Series. In: KDD ’24. 2024:4918–4927.
[11] Kakushadze Z. 101 formulaic alphas. arXiv preprint arXiv:1601.00991. 2016
The Beryl Elites Annual Investment & Innovation Conference: AI in investing
09/08/24
The Beryl Elites Annual Investment & Innovation Conference has featured several panels exploring quant investment strategies and their use of artificial intelligence, including tools like large language models (LLMs) and machine learning (ML). The discussions have covered a wide range of topics and have been highly engaging. Below are some key highlights from these panel discussions.
AI in Investment Management: Distinguishing Substance from Hype
The rapid pace of AI advancements has led many to proclaim it as the next revolution in asset management. AI presents opportunities for discovering new investment strategies and alpha generation, while also enhancing risk management through the improvement of existing quantitative models and the creation of new ones.
However, how can we differentiate between what’s truly actionable and what’s just noise?
The current landscape is, in many ways, extraordinary. Awareness of AI has surged significantly. What was once the domain of niche players and experts has now become mainstream, with tools like OpenAI’s widely-known ChatGPT making headlines.
Institutional investors must cut through the hype to understand AI’s true role in the asset management industry today. It’s crucial to assess whether AI can genuinely add value to client portfolios when applied with scientific rigor and collaboration. The ability to sift through hundreds of earnings reports and scrape vast amounts of online data in minutes doesn’t automatically make the insights extracted valuable. AI is not a magic solution—it's simply another tool in the statistical toolbox.
Key Areas for AI in Finance
AI has been utilized in the financial services sector for decades, relying on algorithms to process data, including human language. Today, AI is being applied in areas such as compliance, investment research, electronic trading, trade matching, and liquidity sourcing. It has enhanced investment processes, improved risk management, and driven innovation.
Currently, AI presents three major areas of efficiency in portfolio management: alpha generation, portfolio construction, and trade execution. Beyond these, AI is also proving useful in fields like legal, HR, and research and development.
Multifaceted Benefits
One of the greatest advantages of AI in asset management is its capacity to filter and analyze ever-growing volumes of data, which is critical to the investment process. AI algorithms can efficiently scrape, clean, and process vast amounts of information, including regulatory filings, social media posts, weather data, trading statistics, web traffic analytics, and government economic reports. The main challenge for asset managers is extracting meaningful insights from this data to support their investment objectives.
By harnessing deep learning, natural language processing (NLP), and LLMs, asset managers have discovered numerous new opportunities. At Beryl Elites conferences, panelists discuss how they analyze and process hundreds of datasets daily, dealing with both structured and unstructured data. AI tools, and their ongoing advancements, have enabled them to handle this data more efficiently and extract greater value from it.
Testing and Validating Models
For asset managers, the ability to extract value from AI systems—whether newly developed or existing ones—is crucial to enhancing their capabilities.
The availability of an ever-increasing number of open-source models presents opportunities to discover new ideas and refine strategies. However, a critical part of the process is thoroughly testing these models, which requires both time and expertise.
It's essential to invest time in testing and validating these models to ensure they deliver meaningful results rather than just generating noise. This demands a scientific approach, with a rigorous process to confirm the models provide value. Given the rapid evolution of AI, continuous testing and evaluation are necessary.
This process also requires immense computing power. Once a model is validated, it must be able to handle vast amounts of data. Scale is key—using LLMs efficiently in an investment process demands the ability to operate at scale, which sets today’s AI apart from earlier versions.
Cloud computing plays a vital role in achieving this scalability.
Leveraging the Cloud
Cloud service providers like Microsoft Azure, Amazon Web Services, and Google Cloud Platform offer the infrastructure needed to store and process vast amounts of data. These resources are essential for asset managers looking to develop their own AI models. As data sets grow larger, computing power increases, and models become more complex, partnering with cloud providers is critical. These platforms also stay at the forefront of innovation in the AI space.
A potential challenge is that the widespread availability of AI tools might give the impression that anyone can easily extract value from them. While platforms like OpenAI and cloud services make it seem straightforward, achieving scalable, daily results in a production environment requires far more expertise. The skills needed to turn AI into a powerful tool extend beyond technology alone. If technical teams operate in isolation, their work may not translate into practical, implementable insights for investment. Collaboration between technology and research teams is vital, and these functions must work seamlessly together to maximize performance.
Based on Beryl Elites panel discussions, successfully leveraging AI models and uncovering new data sources requires close cooperation between engineers and alpha researchers. Engineers need to adopt a research-driven approach, and the distinction between technology and research roles is increasingly becoming blurred.
Execution Efficiency and Alpha Generation
AI implementation today revolves around two key elements: processing ever-larger data sets and using that data to generate alpha. This can have significant effects across an investment firm. For instance, AI's ability to sift through terabytes of data in minutes rather than hours can dramatically boost an asset manager’s efficiency, freeing up resources for higher-value tasks.
Take trading as an example. For a multi-asset manager, trading across various asset classes and regions can be costly, eroding potential alpha. AI not only routes orders more efficiently than human traders but also monitors trades, identifying areas for improvement. With the vast data collected on trade execution and market behaviors, there are many subtle differences across these markets and assets. Machine learning has proven to be a valuable tool, speeding up our understanding of these venues and market nuances. This has delivered measurable value to analysts and traders.
A Rigorous Approach
As AI becomes essential for asset managers, institutional investors must determine which firms can truly deliver results. Does the traditional manager selection process need to change? We don’t think so.
Even for managers who are not quantitative, the same questions used for quant managers apply to ensure a robust, scientific, and reproducible process. What is your approach? Have you thoroughly tested it? Do you have traceability and audit capabilities? Can you explain the results of your simulations?
With new AI models being introduced daily, it’s increasingly difficult to turn these tools into actionable insights. To successfully harness AI’s potential, asset managers need solid infrastructure, well-tested and repeatable processes for analyzing data, and a collaborative, cross-functional approach. This is essential for uncovering alpha opportunities from both existing and new data sets.
Healthcare's Next Big Bets: Exploring Cutting-Edge Investment Opportunities
09/08/24
In the fast-paced world of healthcare, where technology and innovation drive the future, investors are constantly looking for the next big opportunity. From revolutionary care models to the transformative power of AI, the landscape is filled with potential. Let’s take a journey through four captivating discussions that uncover some of the most exciting investment prospects in healthcare today.
First up, in Which Clinical Healthcare Providers are the most Attractive for Investment?, we dive into the world of value-based care and its rising stars. This sets the stage for understanding how specific sectors within healthcare are gaining momentum, providing a foundation that naturally leads us to explore broader industry dynamics.
Jonathan Brayman from Blackstone highlights a future where women’s health, particularly fertility care, takes center stage. He points out that this sector is booming, driven by better patient education and the expansion of services into previously underserved rural areas. It's like unlocking a treasure chest of untapped potential.
AI and machine learning are no longer just buzzwords—they’re the secret weapons transforming how we manage health data and get reimbursed. These technologies are becoming the backbone of modern healthcare.
Charles Boorady of Health Catalyst Capital emphasizes the role of endocrinologists, the new rock stars of healthcare, who are now leveraging cutting-edge technology to address everything from brain-gut connections to managing the complex side effects of antidepressants.
Building on this, we move seamlessly into What's the Latest on Value-based Care versus the Fee-for-Service Model in Healthcare?. Here, we see how the trends discussed earlier play out on a larger scale, influencing the entire healthcare system and setting the context for the challenges and opportunities ahead.
The battle between value-based care and the old-school fee-for-service model is like a heavyweight boxing match. Value-based care, championed by the Affordable Care Act, has been gaining ground, but it’s still facing some tough opponents.
Financial pressures are adding to the drama, creating a slow-motion effect in the industry that’s crying out for government intervention. It’s a classic case of David versus Goliath, with technology being the underdog that could change everything.
Michael Ludwig from MTS Health Partners acknowledges that while innovation within the Centers for Medicare & Medicaid Services (CMS) might seem sluggish, the real excitement is in the long-term developments. He suggests that the true impact will become apparent over the next decade.
Then, as we journey further into the financial side of healthcare, we uncover the intriguing world of How Life Settlements as an Asset Class brings Value to a Portfolio?. This area might not be as widely discussed, but it holds a wealth of potential for those willing to explore its complexities.
Hugh Tawney from Riverrock Funds points out that life settlements, often seen as a niche investment, are actually hidden gems in the investment world. While there are inherent risks—such as the possibility of someone outliving their policy—the potential rewards can be substantial for those who understand how to navigate the complexities.
Diversification is the name of the game here, and for those willing to put in the work, life settlements offer a unique way to add some serious sparkle to a portfolio.
Finally, we connect all these elements by examining the role of innovation in driving the future of healthcare. In What is the Impact of AI Innovation in Healthcare on Private Equity?, we explore how technological advancements, particularly AI, are not only transforming healthcare delivery but also creating new investment frontiers. This brings us full circle, linking the specific sectors we discussed earlier with the broader, technology-driven changes reshaping the industry.
Charles Boorady of Health Catalyst Capital highlights that small companies are the unsung heroes of AI innovation, bringing about change with an agility that large corporations can only envy. These are the Davids of the tech world, challenging the Goliaths with their cutting-edge solutions.
Michael Ludwig from MTS Health Partners adds that the big players aren’t to be underestimated—they’re actively acquiring these innovative firms to maintain their competitive edge. He emphasizes that achieving the right scale through these acquisitions has the potential to transform the entire healthcare landscape.
In summary, the healthcare sector is undergoing a significant transformation, offering a wealth of investment opportunities across various fronts. From the rise of value-based care and the integration of AI to the unique potential of life settlements and the growing importance of specialized providers, the landscape is dynamic and full of promise. Investors who can navigate these trends and strategically position themselves stand to gain considerably as the industry continues to evolve. By embracing innovation and recognizing the value in both established and emerging areas, they can unlock the full potential of this ever-changing sector.
The AI Revolution in Healthcare: Navigating the Path to Transformation
09/01/24
The healthcare industry is on the brink of a transformation driven by artificial intelligence (AI) and strategic mergers and acquisitions (M&A). This analysis explores emerging trends, their potential impact, and the critical success factors that will shape this revolution. Although challenges exist, organizations that successfully address them will be well-positioned to capture significant value in the evolving healthcare landscape.
Key Trends and Implications
1. AI-Driven Innovation Reshaping Healthcare Delivery
AI integration in healthcare is accelerating rapidly. Projections suggest a 37.3% CAGR for the global healthcare AI market, reaching $208.2 billion by 2030. This growth is fueled by advancements in large language models (LLMs), machine learning, natural language processing, and computer vision, which are increasingly applied in diagnostics, treatment planning, and personalized medicine.
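That projection can be sanity-checked with simple compound-growth arithmetic. The sketch below is plain math, not taken from the cited report: it backs out the 2023 market size implied by the $208.2 billion 2030 figure and the 37.3% CAGR.

```python
def implied_base(future_value, cagr, years):
    """Back out the starting value implied by a compound-growth projection."""
    return future_value / (1 + cagr) ** years

# $208.2B projected for 2030 at a 37.3% CAGR implies a 2023 base of ~$22.6B.
base_2023 = implied_base(208.2, 0.373, 7)
```

The same one-liner is a useful first filter on any market-size headline: if the implied baseline is wildly out of line with today's observable market, the projection deserves scrutiny.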
Healthcare providers must prioritize AI adoption to stay competitive.
Significant opportunities exist for AI-focused startups and established tech companies entering healthcare.
Regulatory frameworks must evolve to keep pace with technological advancements.
2. Strategic M&A Activity Intensifying
The healthcare M&A landscape is evolving, with a trend towards vertical integration and technology-driven acquisitions. UnitedHealth Group's $13 billion acquisition of Change Healthcare in 2022 exemplifies this trend. AI-driven advancements in drug discovery and development are expected to drive a surge in biotech acquisitions.
Large healthcare organizations must develop robust M&A strategies incorporating AI capabilities.
Smaller companies with strong AI portfolios become attractive acquisition targets.
Effective integration will be critical to realizing value from M&A activities.
3. Shift in Investment Dynamics
The healthcare investment landscape is shifting towards strategic, long-term investments in AI and digital health technologies. While overall digital health funding declined in 2023, investments in AI-focused healthcare companies remained strong. The average deal size in the first half of 2024 was 40% higher than in 2023.
Investors will prioritize companies with clear AI integration strategies and proven ROI.
Healthcare organizations must articulate a compelling AI strategy to attract investment.
The focus on value creation through AI may lead to a more strategic approach to healthcare investments.
Challenges and Critical Success Factors
While the potential of AI in healthcare is immense, several challenges must be addressed for these trends to fully materialize:
1. Regulatory Compliance
The FDA's evolving framework for AI/ML-based Software as a Medical Device (SaMD) suggests a move towards a more adaptive regulatory approach. Organizations must secure access to high-quality, well-documented data to meet these evolving requirements and demonstrate compliance.
2. Data Privacy and Security
With the increasing use of AI, robust data governance frameworks are essential. The HHS Office for Civil Rights reported a 93% increase in healthcare data breaches from 2018-2022, highlighting the critical nature of this challenge.
3. Algorithmic Bias and Fairness
A review published in the Journal of Medical Internet Research found that only 15.8% of studies on AI applications in healthcare considered algorithmic fairness. Addressing this is crucial for equitable healthcare delivery.
4. Integration and Implementation
Integrating AI into clinical workflows must account for human factors: cognitive biases in medical practice can lead to diagnostic errors and adverse patient outcomes. AI-driven diagnostic tools offer significant opportunities to enhance diagnostic accuracy while mitigating the human factors that contribute to medical errors.
Strategic Imperatives for Healthcare Leaders
To capitalize on the AI revolution in healthcare, leaders should:
Develop a clear AI strategy aligned with business objectives.
Invest in data infrastructure and governance to support AI initiatives.
Foster partnerships with innovative tech-enabled companies and academic institutions.
Prioritize change management and workforce upskilling to facilitate AI adoption.
Engage proactively with regulators to shape the evolving regulatory landscape.
Conclusion
The AI revolution in healthcare presents a transformative opportunity for organizations willing to embrace change and address the associated challenges. By focusing on strategic M&A, targeted investments, and robust implementation strategies, healthcare leaders can position their organizations at the forefront of this revolution, driving improved patient outcomes and operational efficiencies.
As the healthcare landscape evolves, the ability to leverage AI effectively will likely differentiate market leaders from laggards. The time for strategic action is now.
References
[1] Grand View Research. (2023). Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report, 2023-2030. https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-healthcare-market
[2] UnitedHealth Group. (2022). UnitedHealth Group Completes Combination with Change Healthcare. https://www.unitedhealthgroup.com/newsroom/2022/2022-10-3-optum-change-healthcare-combination.html
[3] CB Insights. (2024). State Of Healthcare Report: Sector And Investment Trends To Watch. https://www.cbinsights.com/research/report/healthcare-trends-2024/
[4] Healthcare Dive (2024). Digital health funding declines, but check sizes swell: CB Insights https://www.healthcaredive.com/news/digital-health-funding-declines-q2-2024-cb-insights/721826/
[5] U.S. Food and Drug Administration. (2023). Artificial Intelligence and Machine Learning (AI/ML)-Enabled Medical Devices. https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-aiml-enabled-medical-devices
[6] U.S. Department of Health and Human Services Office for Civil Rights. (2024). 2023 Healthcare Data Breach Report. https://www.hhs.gov/about/news/2023/12/06/hhs-announces-next-steps-ongoing-work-enhance-cybersecurity-health-care-public-health-sectors.html
[7] Sarkar R, Martin A, Niel O, Lippi G. Artificial Intelligence in Medicine: Today and Tomorrow. J Med Internet Res. 2023;25. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10287014/
[8] NIH National Library of Medicine. (2023). AI Adoption in Hospitals: Current State and Future Prospects. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11041415/; see also https://www.aha.org/center/emerging-issues/market-insights/ai-adoption
[9] Cureus. (2023). Breaking Bias: The Role of Artificial Intelligence in Improving Clinical Decision-Making. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10115193/
AI’s Role in the Future of Financial Forecasting: A Glimpse Into Tomorrow’s Financial World
09/01/24
In the dynamic realm of artificial intelligence (AI), the financial industry is rapidly embracing cutting-edge technologies like Generative AI and Large Language Models (LLMs) to revolutionize forecasting accuracy, risk management, and decision-making. As these powerful tools become increasingly integrated into the fabric of financial systems, industry leaders and decision-makers must grasp their distinct roles and vast potential. The Beryl Elites’ captivating series of videos on AI offers a rich tapestry of insights into how these technologies are reshaping the future of financial forecasting and beyond.
One of the standout videos from Beryl Elites, titled The Distinct Roles of Generative AI & LLM in Yielding Significant Insight in Financial Forecasting, delivers several fascinating analogies that illustrate the process of integrating these advanced technologies into financial practices:
The process of developing LLMs through machine learning is likened to training a dog - both require iterative practice to hone understanding and predict patterns. Just as a well-trained dog anticipates its owner’s commands, LLMs, powered by machine learning, excel in analyzing market trends and delivering precise financial predictions. This analogy underscores the importance of continuous learning and refinement in developing these models, highlighting how they become more accurate and reliable over time.
AI is described as the master chef, machine learning as the cookbook, and data streams as the ingredients. In this culinary metaphor, Generative AI steps in as the innovative sous-chef, crafting novel strategies beyond the established recipes. This vivid comparison perfectly captures the delicate balance between creativity and precision that is paramount in financial forecasting. While Generative AI introduces fresh strategies and perspectives, LLMs ensure the accuracy of predictions by adhering to tried-and-true methods, much like following a trusted recipe. This balance is crucial in the financial world, where innovation must be tempered with caution to avoid unnecessary risks.
Another compelling video, AI Pitfalls and How to Interact with ChatGPT, presents a unique perspective on AI, advocating for a shift in how we perceive these technologies:
AI should be viewed as augmented intelligence rather than artificial intelligence, highlighting the collaborative potential of AI - enhancing, not replacing, human capabilities. This perspective shifts the narrative from one of competition between humans and machines to one of partnership, where AI tools augment human decision-making processes, providing valuable insights that humans might miss. This approach is particularly valuable in complex fields like finance, where human intuition and experience are irreplaceable, yet can be significantly enhanced by AI’s ability to process vast amounts of data quickly and accurately.
In finance and investment management, this partnership between human intuition and AI’s ability to process massive amounts of data leads to more informed, strategic decisions, particularly in fields that demand deep understanding and ethical judgment. AI’s role is not to take over but to assist, providing a second layer of analysis that can help identify trends, risks, and opportunities that might otherwise go unnoticed.
Risk management, a cornerstone of financial services, is another area where AI’s transformative power shines. This is particularly evident in Beryl Elites’ video, How AI Bolsters Risk Management:
AI’s ability to map and predict risks with astonishing accuracy is revolutionizing lending, trading, and investment strategies. By analyzing patterns and anticipating outcomes, AI helps financial institutions make better-informed decisions, reducing the likelihood of costly errors. This capability is especially crucial in today’s volatile markets, where even small miscalculations can lead to significant financial losses.
The "illusion trap" is a cautionary note highlighted in the video, where AI models, if trained on biased or limited data, can lead to dangerously misleading conclusions. To counteract this, AI models must be trained on comprehensive, diverse datasets and rigorously validated against factual information. This ensures that the models are not only accurate but also fair, avoiding the pitfalls of bias that can skew results and lead to poor decision-making.
This approach boosts the reliability of AI predictions and underscores the critical importance of transparency and traceability in AI-driven risk management. By ensuring that every step of the AI process is transparent and traceable, financial institutions can maintain trust with their clients and regulators, demonstrating that their AI-driven decisions are based on solid, verifiable data.
The influence of AI and LLMs extends well beyond the financial sector, as highlighted in the insightful video Which Industries Are Primary Users of LLM by NVIDIA’s Startup Division?:
Healthcare: NVIDIA's technology is transforming the medical field, with significant impacts on computer analysis, medical image processing, and the study of fundamental biological structures. These innovations are making a tangible difference in patient care by enabling more accurate diagnoses and personalized treatment plans. Additionally, AI models, including LLMs, are accelerating the development of new therapies by refining medical language models for greater precision.
Media: In the media industry, NVIDIA's LLMs are revolutionizing content creation and analysis. Their ability to understand and predict language patterns is driving innovation, from generating scripts to tailoring content for diverse audiences and analyzing consumer feedback. This integration of AI is essential for media companies striving to stay ahead in an ever-evolving landscape.
Broader Impact: Beyond healthcare and media, NVIDIA’s technology is opening new possibilities by enabling the integration of smaller participants into larger models, addressing unique challenges from diverse perspectives. The flexibility and explosive potential of these models foster innovation across various industries, empowering organizations to navigate uncertainty by experimenting with multiple models and approaches.
As AI technology continues to evolve, the considerations for its development, particularly in investment management, are paramount. The Beryl Elites video, What Are Considerations for AI Development in Investment Management? emphasizes several key points that organizations must consider:
The necessity of eliminating ambiguity and ensuring transparency in AI models cannot be overstated. As these models become more complex, it is essential that their outputs are clear and understandable to human users. This not only helps in making better decisions but also in building trust with stakeholders who may be wary of the “black box” nature of AI.
Governance and ethics take center stage in this process, with organizations needing to establish clear guidelines for how AI should be used, ensuring that it is applied in ways that are both ethical and effective. This includes eliminating bias, protecting privacy, and ensuring that AI tools are used to benefit all stakeholders, not just a select few.
They also stress the need to start with focused, cost-effective use cases that pave the way for innovation. By starting small and scaling gradually, organizations can experiment with AI in a controlled manner, learning from each deployment and refining their strategies before rolling out AI on a larger scale.
By addressing these considerations head-on, organizations can seamlessly integrate AI into their operations while deftly managing the associated risks. This careful approach ensures that AI is not just a tool for innovation, but a catalyst for sustainable growth and long-term success.
In conclusion, AI and its advancements in Generative AI and Large Language Models (LLMs) are increasingly shaping the future of financial forecasting and beyond. As these technologies continue to evolve, their influence will only grow, driving innovation and progress across various industries. However, their successful integration depends on a thoughtful approach - one that carefully balances advanced technological capabilities with human expertise, ethical considerations, and a strong commitment to transparency. By navigating these complexities effectively, the financial industry and other sectors can fully harness the transformative power of AI, paving the way for a future marked by profound and exciting advancements.
Implementing Alternative Data in Investing: Moving Beyond Buzzwords
8/25/24
The world of investing has increasingly turned to alternative data as a means to gain an edge in the market. However, merely invoking terms like "AI," "machine learning," or "big data" is not enough to create value. It’s becoming increasingly important to focus on the essential aspects of implementing alternative data in investing while avoiding the pitfalls of buzzwords.
Understanding and Evaluating Data Quality
When integrating alternative data into investment strategies, it's crucial to begin by understanding what kind of data is being used and how insights are derived. Managers often claim that they use alternative data/big data sources, but this statement alone provides little insight into the actual value being created. As a savvy investor or data scientist, it's essential to dig deeper:
What specific data sources are being used?
How are these insights being extracted and integrated into the decision-making process?
Evaluating the quality of data becomes particularly challenging when working with multiple datasets. Investors must ensure that the data is complete and provides sufficient breadth to cover the investment universe. In the Beryl Elites Spotlight series video, Jess Stauth, chief investment officer at Fidelity Investments, emphasizes the importance of data comparability across multiple companies when analyzing alternative data. She noted, “I do not care anything about one company unless I have enough other companies with the same data set to compare it to.” Stauth also highlights the importance of historical data in validating observed patterns, stating, “you really need to have enough historical data to gain confidence that the patterns or the correlations that we see are statistically significant.”
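Stauth’s point about history length can be made concrete. For a sample correlation r measured over n observations, the standard test statistic is t = r·sqrt((n − 2)/(1 − r²)): the same correlation that is statistical noise over a short history becomes significant over a long one. The numbers below are purely illustrative, not from the video.

```python
import math

def corr_t_stat(r, n):
    """t-statistic for testing whether a sample correlation r,
    measured over n observations, differs from zero."""
    return r * math.sqrt((n - 2) / (1 - r * r))

# The same r = 0.3 looks very different with 12 vs. 120 periods of history:
t_short = corr_t_stat(0.3, 12)   # ~0.99 -> indistinguishable from noise
t_long = corr_t_stat(0.3, 120)   # ~3.42 -> significant at conventional levels
```

This is why a dataset with only a year or two of history rarely supports a confident investment signal, however striking its in-sample correlations look.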
Integration and Normalization of Multiple Datasets
The integration of alternative data requires normalization across different datasets to ensure comparability. This involves processes like ticker mapping and entity resolution, which are essential for creating a unified view of the data. Proper data governance and connectivity allow for more accurate analysis and decision-making.
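A minimal sketch of the entity-resolution step described above, assuming a hand-built alias table. All names and values here are invented for illustration; production systems use dedicated security-master services rather than a dictionary.

```python
# Hypothetical alias table mapping vendor-specific company labels to tickers.
ALIASES = {
    "apple inc": "AAPL",
    "apple computer": "AAPL",
    "google": "GOOGL",
    "alphabet inc class a": "GOOGL",
}

def to_ticker(raw_name):
    """Normalize a free-text company name and resolve it to a ticker.
    Returns None for unresolved entities, which should be queued for review."""
    key = raw_name.strip().lower().rstrip(".")
    return ALIASES.get(key)

# Re-keying a vendor dataset on the canonical ticker so it can be joined:
records = [("Apple Inc", 1.2), ("Google", 0.4)]
normalized = {to_ticker(name): value for name, value in records}
```

The design point is the `None` return: an unresolved name should surface for human review rather than silently dropping out of the joined view.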
However, real-world data is often messy, and a resilient system is needed to handle these challenges. In the Beryl Elites Spotlight series video, Jess Stauth also emphasizes the importance of being prepared for data quality issues: “you have to be resilient to knowing that you will have real-world messy data and build around that.” Building robust processes for data quality checks, error handling, and data interpolation is key to ensuring that your models remain reliable and effective.
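One concrete form of that resilience is gap-filling. The toy function below linearly interpolates missing observations between known values and leaves unbounded gaps alone; it is a stand-in for the kind of data-repair step described above, not any particular firm’s pipeline.

```python
def interpolate_gaps(series):
    """Fill None gaps between known values by linear interpolation.
    Leading/trailing Nones are left untouched for a human to review."""
    out = list(series)
    known = [i for i, v in enumerate(out) if v is not None]
    for a, b in zip(known, known[1:]):
        step = (out[b] - out[a]) / (b - a)
        for i in range(a + 1, b):
            out[i] = out[a] + step * (i - a)
    return out

interpolate_gaps([10.0, None, None, 16.0])  # -> [10.0, 12.0, 14.0, 16.0]
```

Deliberately not filling leading and trailing gaps is the resilience point: extrapolating beyond the observed range invents data, while interior interpolation only smooths over it.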
Avoiding the Trap of AI-Washing
In today's investment landscape, there is a growing trend of "AI-washing," where companies overstate their use of AI and related technologies to appear more innovative. This issue was highlighted in the Investing Pioneers Webinar Series by Angelo Calvello, Co-Founder at Rosetta Analytics, who discussed the pervasiveness of AI-washing in the industry. To combat this, investors and analysts need to be vigilant and ask the right questions:
What specific type of AI is being used?
How is it integrated into the investment process?
Is it truly a game-changer, or is it merely being used for operational efficiency?
Having your team type in ‘import openai’ does not mean that you are at the cutting edge of artificial intelligence. It's essential to look beyond the buzzwords and focus on how these technologies are genuinely adding value to investment strategies. As Tony Berkman, managing director at Two Sigma, pointed out in the Beryl Elites Spotlight series video, "It's really trying to think in a more nuanced way... what are the types of questions you can ask that matter for the future, that you can use the alternative data to really get conviction in a strong investment thesis."
Conclusion
The integration of alternative data into investment strategies offers significant potential for gaining a competitive edge. However, success requires more than just throwing around buzzwords like AI and machine learning. It demands a deep understanding of the data, rigorous evaluation of data quality, and effective integration of multiple datasets. By avoiding the pitfalls of AI-washing and focusing on building a robust investment strategy, investors can unlock the true value of alternative data.
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
Expert Insights on Hedge Fund Portfolio Construction and Strategy Selection
8/25/24
In a recent panel discussion at the Beryl Elites event, several prominent hedge fund managers and investment officers offered their perspectives on the evolving role of hedge funds in modern portfolios. The panel featured Amit Sahni of New York Life Investments, Mike Weinberg of PGGM, Alisa Melman of East Lane Management, and Liz Hillman of Barlow Partners. Their insights provide a nuanced understanding of how hedge funds can be strategically utilized in today's complex market environment.
The Enduring Value of Hedge Funds
The discussion began with an exploration of the ongoing relevance of hedge funds in investment portfolios. Mike Weinberg, reflecting on the evolving nature of hedge funds, explained, “historically, hedge funds were seen as a distinct asset class, but our view has evolved. We now consider them as integral parts of other asset classes.” Despite this shift, Weinberg emphasized that hedge funds continue to play a critical role by offering “uncorrelated, alpha-driven returns” that are essential for mitigating risk, especially during market downturns. He pointed out that in periods when traditional indices like the S&P 500 were down nearly 40%, hedge funds performed significantly better, demonstrating their value in a diversified portfolio.
Liz Hillman echoed this claim, acknowledging that while the expectations for hedge funds may have tempered since the financial crisis, they remain valuable. “We're all looking to find higher returns, and hedge funds still offer real value within a broader portfolio,” she noted. Hillman also highlighted the historical performance of hedge funds, suggesting that they have a crucial role to play as markets become more challenging.
Selecting and Managing Hedge Fund Strategies
When it comes to selecting hedge fund strategies, the panelists emphasized the importance of diversification. Amit Sahni stressed that no single strategy is likely to outperform consistently in all market conditions. “You should have exposure to a diversified set of alternative strategies,” he suggested, pointing out the risks of over-concentration in strategies like long-biased equity, which might struggle during market downturns.
Alisa Melman added that the best hedge fund managers are those who can generate returns on both the long and short sides of their portfolios. “We look for managers who actually identify shorts as a profit center, not just as a hedging strategy,” she said. Melman also pointed to sectors like life sciences and technology, which are ripe with opportunities due to high levels of disruption and innovation. “There's lots of dispersion, which is what long/short managers love—plenty of opportunities on both the long and short sides.”
Liz Hillman reinforced the importance of staying small and nimble. “Everyone says smaller hedge funds outperform over time, which is why we chose managers committed to staying small. This way, we don't have to worry about them getting too big,” she explained, underscoring the importance of selecting managers who can adapt to changing market conditions.
Overcoming Challenges in Hedge Fund Portfolio Construction
The panelists also discussed the challenges associated with hedge fund portfolio construction, particularly the risks of overconcentration and the importance of thorough due diligence. Amit Sahni highlighted a common pitfall: “Higher concentration in positions can be a red flag. We've seen managers who perform well and get overconfident, leading them to make bigger bets, which can backfire.” He stressed the importance of monitoring these tendencies closely to avoid unnecessary risks.
The conversation also touched on the significance of transparency and trust in manager communications. Liz Hillman shared a personal experience where a manager's failure to disclose critical information about a portfolio holding led to their eventual dismissal. “Every quarter when I read the update on all the names in the portfolio, I never trusted that things were going as well as they were claimed, and so I finally had to pull the trigger,” she stated. This story highlighted the need for investors to maintain a vigilant approach, continuously scrutinizing the information provided by their managers.
The panelists agreed on the necessity of diversification across multiple strategies. “The breadth of investment options is crucial,” said Sahni, emphasizing that a manager's ability to access and capitalize on diverse opportunities is key to sustained performance. “Even a skilled manager with a high hit ratio needs a broad array of options to ensure success,” he added.
Looking Ahead: Strategic Optimism
As the discussion concluded, the panelists expressed cautious optimism about the future of hedge funds. While acknowledging the challenges posed by current market conditions, they remained confident in the ability of well-selected, diversified hedge fund portfolios to deliver strong returns. “Our group has done a lot of research on constructing portfolios with alternatives, and it's more important than ever to make the right adjustments,” Sahni noted, highlighting the importance of a strategic approach to portfolio construction.
The panel also touched on the impact of Environmental, Social, and Governance (ESG) factors on hedge fund investing. Mike Weinberg emphasized the growing importance of ESG considerations, stating, “At our firm, ESG is one of the four pillars we consider in every investment decision. Managers who are interested in capital from us cannot take this lightly.” This underscores the increasing relevance of ESG criteria in shaping the strategies of forward-looking hedge funds.
In a rapid-fire closing round, the panelists shared their predictions for the best-performing hedge fund strategies. The consensus leaned towards diversified, multi-strategy approaches, with a focus on areas like global macro and life sciences long/short equity. Despite some reservations about the potential performance of emerging markets and long-biased strategies, the overall mood was one of cautious optimism.
Insights Summary
This panel discussion provided valuable insights into the current state of hedge fund investing, emphasizing the need for diversification, rigorous due diligence, and strategic flexibility. As markets continue to evolve, the ability to adapt and select the right mix of strategies will be crucial for achieving long-term success in hedge fund portfolio management. Investors are encouraged to look beyond short-term trends and focus on the underlying drivers of value, ensuring that their hedge fund allocations are well-positioned to navigate both current and future market challenges.
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
Benefits and Challenges of Alternative Data in Finance
8/18/24
Application of Alternative Data: Enhancing the Investment Process
Alternative data is significantly enhancing the investment process. In the Beryl Elites Spotlight series video, “In What Ways are Alternative Data Utilized to Enhance the Investment Process?”, Mike Chen, Head of Next Gen Research at Robeco, discusses three key ideas:
Valuing Intangibles: Alternative data provides innovative methods for assessing intangible assets. For instance, brand value can be gauged through social media sentiment analysis, customer reviews, and online traffic data, while patent value can be estimated by examining patent citations and technological relevance in research publications. These insights often offer a more real-time reflection of market perceptions compared to traditional methods.
Sentiment Analysis: Alternative data excels in capturing real-time sentiment from social media, news articles, employee reviews, and customer feedback. This helps companies understand brand health, customer loyalty, and employee satisfaction, and identify potential risks or opportunities not visible through financial metrics alone.
Limitations of Traditional Financial Statements: Alternative data fills gaps left by traditional financial data, offering a more nuanced and timely view of a company’s potential. For example, web traffic patterns or app usage data can indicate future sales trends, while satellite imagery can estimate agricultural yields or monitor retail store traffic.
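To give a flavor of the sentiment-analysis idea above, here is a deliberately tiny lexicon-based scorer. Real systems use trained NLP models and far richer lexicons; the word lists below are invented for illustration.

```python
POSITIVE = {"love", "great", "reliable", "fast"}
NEGATIVE = {"broken", "slow", "refund", "disappointed"}

def sentiment_score(text):
    """Crude lexicon score in [-1, 1]: (pos hits - neg hits) / total hits."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

sentiment_score("love this reliable product")   # -> 1.0
sentiment_score("totally broken and slow")      # -> -1.0
```

Aggregated over thousands of reviews or posts, even a score this crude can track brand health over time; the sophistication in practice goes into handling negation, sarcasm, and domain-specific vocabulary.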
Challenges in Using Alternative Data
Despite its advantages, alternative data comes with significant challenges. In the Beryl Elites Spotlight series video “What Challenges are Associated with the Use of Alternative Data?”, Daniel Sheyner, Senior Portfolio Manager at Chimera Capital Management, highlights several issues:
Over-Reliance on Historical Data: Many alternative datasets lack sufficient historical records, making it risky to rely solely on historical correlations and statistical models for forecasting.
Biases in Data: The COVID-19 pandemic exposed biases in data sets, such as regional, demographic, and sales channel biases. Traditional analytical methods may fail under unprecedented circumstances, leading to incorrect conclusions.
Confirmation Bias: Investors may seek data that supports their pre-existing investment theses, leading to flawed decision-making. It’s crucial for investors to remain intellectually honest and challenge their assumptions.
Data Complexity: Mike Chen from Robeco emphasizes that alternative and traditional data are just data. To make sound investment decisions, it’s essential to consider multiple perspectives and not rely on just a few datasets.
Future Innovations: Standardization and Expansion
The next decade will see significant innovations in alternative data, particularly in standardization and data diversity, as discussed in the Beryl Elites Spotlight series video “What Innovation in Alt Data can We Anticipate Over the Next Decade?”:
Standardization: According to Daniel Sandberg, Ph.D., CFA at S&P Global, standardization in data collection and analysis will become more common, reducing the time needed for clients to evaluate data. As standardization progresses, data extraction capabilities will improve, becoming more precise and less noisy.
Broadening Data Types: Tony Berkman, Managing Director at Two Sigma, notes that the diversity of data types will continue to grow, especially in life sciences and B2B industries. Advances in technology will enhance the understanding and interpretation of complex data interactions.
Reputational Risk: As alternative data proliferates, reputational risk for companies will increase. Data analytics as a service is becoming a trend, helping investors without data analytics capabilities make better use of alternative data.
We are in a data renaissance, akin to the rise of hedge funds three decades ago. Alternative data have become essential for every investor, providing a competitive edge. While challenges like complexity and biases persist, advancing technology will make alternative data more accessible and vital across various fields.
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
Riding the AI Wave
8/16/24
AI has emerged as the tech world's latest trend, with billions poured into its development. However, skepticism grows as the gap between investment and revenue widens. OpenAI, for example, has reached only about $3.4 billion in annualized revenue, a small fraction of the roughly $600 billion in industry-wide revenue some analysts estimate would be needed to justify current AI infrastructure spending. This raises concerns about whether AI is the next big tech revolution or just another bubble. While the potential is undeniable, the industry must navigate these uncertainties with caution and strategy.
One key driver of AI investment is FOMO—Fear of Missing Out. Companies are rushing to secure their place in this anticipated tech revolution, but the promised productivity gains have yet to materialize significantly. Interestingly, Goldman Sachs predicts a modest economic impact from AI, with only a 0.5% boost to U.S. productivity and a 0.9% increase in GDP over the next decade. Despite these lukewarm forecasts, tech giants continue to invest heavily, with global cloud vendors expected to spend $227 billion in 2024.
The 10 most popular AI models are listed in the table below:
AI in Magnificent Seven
Microsoft (MSFT) remains a leader in AI, driven by its Azure cloud platform, which saw a 30% revenue increase, outpacing AWS. The AI-powered Microsoft 365 Copilot has boosted Office suite subscriptions, with reported productivity gains of up to 40%, further enhancing revenue. Despite setbacks like the CrowdStrike-related blue-screen outage, Microsoft's deep AI integration across its ecosystem solidifies its dominance in productivity and enterprise solutions.
Apple (AAPL), one of the world's most valuable companies, stays in the spotlight even after Warren Buffett cut Berkshire Hathaway's Apple holdings by 50%. Despite market volatility, Apple's Q3 financial report exceeded Wall Street expectations in both revenue and earnings per share. Continued investment in AI R&D could drive future iPhone sales, potentially boosting overall performance.
NVIDIA (NVDA), a leader in AI tech, has seen its GPUs become vital for AI computing, contributing over 50% of its $33 billion revenue in FY2023. Its Omniverse platform, with 200,000 users, is a game-changer for creative industries. However, challenges like delayed Blackwell AI chips and a cooling AI market could weaken NVIDIA’s position.
Alphabet (GOOGL) uses AI to power its core business, particularly in advertising and cloud services. AI-driven algorithms boost ad revenue, which totaled $237.86 billion in FY2023. Google Cloud, generating $33.1 billion in revenue, also benefits from AI. Alphabet integrates AI into products like YouTube, Google Photos, and Google Assistant, enhancing user experience.
Amazon (AMZN) has integrated AI across its e-commerce and cloud computing businesses, strengthening its market leadership. AI enhances customer experiences and boosts efficiency in logistics, contributing to Amazon’s $412.1 billion in net revenue for FY2023. AWS offers AI tools like SageMaker, enabling businesses to innovate and scale.
Meta Platforms (META) relies heavily on AI to drive social media advertising, generating nearly all of its $140 billion revenue in FY2023. AI optimizes ad targeting and content distribution across platforms like Facebook and Instagram. Meta also uses AI for content moderation and invests in developing the Metaverse, opening new revenue streams.
Tesla (TSLA) has made significant strides in AI, particularly with its Full Self-Driving (FSD) system, contributing $1 billion to $3 billion in annual revenue. AI-driven automation in Gigafactories enhances production efficiency. Tesla also integrates AI into energy solutions like Powerwall, diversifying revenue streams, and continues to lead in AI innovation.
AI in Entertainment
AI is revolutionizing entertainment too. Netflix's AI-powered recommendation engine saves the company an estimated $1 billion annually by reducing customer churn. Outside entertainment, Bank of America's virtual assistant, Erica, has enhanced customer service with over 2 billion interactions. These AI innovations boost user experiences and company profitability.
AI's impact on entertainment extends beyond customer interactions, transforming the creative process and boosting productivity. With AI predicted to automate 40% of routine tasks, creative professionals can focus on innovation, leading to faster production of high-quality content. The entertainment industry's AI investment, projected to reach $200 billion by 2025, underscores its potential to revolutionize content creation. However, challenges like data quality and infrastructure must be addressed. Companies that successfully integrate AI will enhance creativity, set new industry standards, and lead in a rapidly evolving landscape.
In conclusion, the AI wave is sweeping through industries, driven by high expectations and massive investments. However, the growing gap between spending and returns raises questions about whether AI will be a transformative tech revolution or just another overhyped trend. Like the dot-com bubble, AI's potential is vast, but realizing it requires careful strategy and foresight. Will AI be the blockbuster hit of the century or another overhyped sequel? The outcome remains to be seen.
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
Disciplined Investing
8/11/24
Since the Covid lows of 2020, U.S. markets—particularly technology companies—have seen significant growth. After a decline and pause in 2022, the markets rebounded strongly in the first half of 2024. The sentiment-driven nature of the market has been clear, with investors primarily fixated on two factors: Federal Reserve interest rate policy and AI. This focus has led to momentum-driven market gains, record levels of market concentration (e.g., the "Magnificent 7"), and an environment of irrational exuberance pushing perceived AI beneficiaries to remarkable heights.
In this climate, investors have largely overlooked other critical factors, losing sight of the importance of cyclical earnings resilience. There has been notable complacency about achieving a soft landing, with many underestimating the impact of sustained higher interest rates. Recently, however, investors have begun to recognize the long-term implications of these rates, likely spurred by the realization that pandemic-era savings had been propping up consumer spending for an extended period. As a result, a focus on companies with resilient earnings during economic downturns remains a wise strategy—one that has historically proven valuable during challenging times.
Investors have also been quick to chase popular AI stocks, often without fully considering how these companies can sustain their earnings growth over the long term or through a potential recession. In essence, investors have sprinted through the first 500 meters, forgetting that long-term investing is a marathon.
Now, it seems those early sprinters, who led the initial charge, have hit a wall and run out of steam. Concerns are emerging about the sustainability of AI spending, the ability of companies to convert this spending into tangible revenue, and the speculative nature of generative AI. The excitement over a predicted September rate cut has also waned, with investors now worrying that it may come too late to rejuvenate their favorite stocks. Just weeks ago, these same rate predictions were viewed as the fuel needed to push these names even further.
Throughout this period, disciplined quality growth investors have stayed focused on companies with durable business models that generate predictable streams of growing earnings. This method, though slower at the start, is akin to the steady training and proper fueling necessary for successful marathon running.
Recent signs indicate that the early sprinters are losing momentum, with Nvidia, the largest year-to-date winner, experiencing a significant sell-off over the past four weeks. The commitment to a disciplined approach, rather than chasing the latest trend, remains central.
This correction serves as a reminder: it's not about who is first out of the gate, but who finishes the race. Therefore, the focus on long-term investing—centered on resilient companies with highly predictable earnings growth—remains crucial. Like marathon runners who consistently build their endurance, this approach aims to compound clients’ capital over the long haul. Happy investing!
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
TikTok’s Case - Harnessing AI as a Service
8/5/24
In the bustling world of social media, TikTok emerged, captivating millions with its short, snappy videos. Initially known as Douyin in China, TikTok faced the challenge of engaging users amidst numerous social media options. By partnering with top AI service providers, TikTok leveraged advanced machine learning algorithms to analyze user behavior and deliver highly personalized content, creating an addictive user experience.
How TikTok Utilizes AIaaS
Scalability: Ensures smooth performance during viral spikes.
Fun Features: Provides easy-to-use video effects and editing tools.
Moderation: Enhances content safety through AI-driven moderation.
Monetization: Delivers targeted AI-powered ads.
The flexibility of AIaaS allowed TikTok to continuously update and refine its features, staying ahead of trends and keeping the platform fresh and engaging. TikTok transformed from a newcomer to a global sensation with over a billion users. Its success story highlights the power of AIaaS in delivering personalized, engaging, and ever-evolving user experiences, setting new standards in digital engagement.
Buy vs. Build: Integrating AI into Business
Businesses must decide whether to buy AI capabilities from third-party providers or build them in-house. Buying AI capabilities involves lower upfront costs and predictable subscription models, making it an attractive option for many companies. It allows for quick integration and faster deployment, enabling businesses to respond swiftly to market demands. However, these pre-built solutions may not perfectly align with a company's unique requirements, and reliance on third-party providers can introduce dependency issues.
On the other hand, building AI capabilities in-house requires substantial investment in infrastructure, tools, and talent, leading to higher initial costs. Developing AI from scratch can delay time to market, but it offers the advantage of highly customized solutions tailored to specific business needs. Companies that build in-house retain full control over their AI systems, data, and processes, which can be crucial for handling sensitive information and making quick adjustments or innovations.
Overall Suggestions for Companies
For companies considering AI integration, the choice between buying and building hinges on several factors. If cost-effectiveness, scalability, and a lack of in-house resources or expertise are primary concerns, buying AI capabilities from third-party providers is likely the best option. This approach offers a pragmatic solution with lower upfront costs and faster deployment times.
Conversely, businesses with specialized needs, sensitive data to manage, and a long-term vision for AI development might find building in-house more advantageous. Although this path requires a significant initial investment and a longer development timeline, it provides the benefit of creating highly tailored solutions with full control over AI systems. This autonomy can be vital for maintaining data security and ensuring the AI evolves in alignment with the company’s strategic goals.
TikTok’s journey from Douyin to a global sensation underscores the transformative power of AIaaS, setting a benchmark for businesses aiming to deliver personalized and engaging digital experiences.
Citation:
OpenAI. (2024). An engaging illustration showing the journey of TikTok's evolution powered by AI as a Service (AIaaS). Generated using DALL-E.
OpenAI. (2024). An illustration depicting two anthropomorphized characters representing 'Buying from Third-Party' and 'Building AI Capabilities In-House,' highlighting quick deployment and experience development. Generated using DALL-E.
OpenAI. (2024). An illustration with two anthropomorphized characters representing 'Buy AIaaS' and 'Build In-House,' showcasing cost savings, scalability, data security, specialized needs, and long-term planning. Generated using DALL-E.
Mention, J. (2023). The Evolution of TikTok: From Musical.ly to a Global Phenomenon. LinkedIn. Retrieved from https://www.linkedin.com/pulse/evolution-tiktok-from-musically-global-phenomenon-mention-marketing/
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
Is China's Economy in Trouble?
7/29/24
Opinions are divided, with some arguing that China has a long-term plan while others are more pessimistic.
For those who believe China's economy is struggling, consider this: China's economy is projected to grow by 5% this year, one of the highest rates among major economies, second only to India's expected 6.8%. Moreover, the People's Bank of China (PBOC) cutting interest rates is a standard central-bank move to stimulate specific sectors, such as real estate, and should not by itself be read as evidence of a contracting economy.
Supporters of China's economic health point out several factors: (1) China has advanced in many technological areas, surpassing the West in some instances; (2) it is the world’s largest exporter, with $3.38 trillion in exports in 2023, $2.557 trillion in imports, and a trade surplus of $823 billion; (3) its GDP in 2024 is $35.291 trillion based on purchasing power parity (PPP), compared to $28.78 trillion for the US, making it 23% larger.
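The PPP comparison in point (3) can be verified with quick arithmetic, using the figures quoted above:

```python
# PPP-based GDP comparison, using the figures cited above ($ trillions).
china_gdp_ppp = 35.291
us_gdp_ppp = 28.78

pct_larger = (china_gdp_ppp / us_gdp_ppp - 1) * 100
print(f"China's PPP GDP is about {pct_larger:.0f}% larger")  # ≈ 23%
```

The ratio works out to roughly 1.23, consistent with the "23% larger" claim in the text.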
However, the PBOC recently cut the 7-day reverse repo rate for the first time in about a year, followed by a reduction in the key lending benchmark rate (5-year Loan Prime Rate). Consequently, interest rates for repo transactions using Chinese government bonds have hit a record low of 1.70%, and medium-term loan rates are now below 4%, another record low for China (See chart below).
So why is China relentlessly cutting interest rates?
The reason lies in the aftermath of the Great Financial Crisis. While the West repaired its private balance sheets, China's private sector debt as a percentage of GDP surged past 200%. Historically, such high debt levels have led to crises (e.g., the Japanese real estate bubble, the Spanish and Irish housing crises, and the Asian financial crisis). To counter declining growth, China initially increased corporate sector leverage and, in the last decade, encouraged households to join the housing market boom with cheap mortgages. However, Xi Jinping's administration decided to slow down this leverage, leading to trouble for property developers and a frozen housing market.
Similar to Japan in the 1990s, China is attempting to address the problem by lowering interest rates. However, a heavily leveraged private sector, already struggling with a deleveraging housing market, may not be inclined to take on more credit just because of lower rates. Japan's experience in the 1990s, where slashing rates from 8% to 1% did not revive credit demand, serves as a cautionary tale.
Is China in trouble and applying the wrong policy? Could lower interest rates increase pressure on their currency? What are your thoughts?
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team
U.S. National Debt: A Looming Catastrophe or Persistent Bubble?
7/22/24
The U.S. national debt, now nearly $35 trillion, has been labeled a looming catastrophe by many prognosticators for decades. Despite numerous warnings, this fiscal challenge feels increasingly like a scene from "Waiting for Godot," with no clear resolution in sight. The government projects at least $2 trillion in deficits annually, while revenues hover around $5 trillion per year. A staggering 100% of this revenue is consumed by Social Security, Medicare, Medicaid, and interest on the debt. Interest payments alone exceed $1 trillion annually, accounting for over 20% of government revenue.
Beyond these mandatory expenses, an additional $2 trillion per year is required to fund defense and other government departments, which are not facing cuts. There are also additional expenses, such as aid to Ukraine and other off-budget war funding. Despite these mounting problems, the dollar remains stronger than it was in the 1970s against most currencies, with exceptions for fiscally sound nations like Switzerland. While logic suggests these bubbles should eventually burst, they persist far longer than expected.
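The fiscal arithmetic in the two paragraphs above can be laid out explicitly. The inputs are the round numbers quoted in the text, in trillions of dollars:

```python
# U.S. fiscal arithmetic from the figures above ($ trillions, rounded).
revenue = 5.0                  # annual federal revenue
mandatory_plus_interest = 5.0  # Social Security, Medicare, Medicaid, interest
                               # (consumes 100% of revenue per the text)
interest = 1.0                 # interest on the debt (text: "exceeds $1 trillion")
other_spending = 2.0           # defense and other departments

deficit = mandatory_plus_interest + other_spending - revenue
interest_share = interest / revenue

print(f"Implied annual deficit: ${deficit:.1f}T")          # $2.0T
print(f"Interest share of revenue: {interest_share:.0%}")  # 20%
```

With mandatory spending and interest absorbing all revenue, every additional dollar of discretionary spending is borrowed, which is why the projected deficit matches the $2 trillion of defense and departmental outlays.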
There are many theoretical ways this fiscal situation could resolve, some more drastic than others. For instance, if the U.S. fully embraces an "America First" agenda and terminates alliances, key countries in Europe, Asia, and the Middle East might align with China. This shift could lead to these countries ceasing their purchase of U.S. debt, adopting the Chinese yuan as their currency of choice, and purchasing Chinese bonds. Such a scenario could precipitate a rapid decline in the dollar, a significant rise in U.S. inflation, a major bear market in U.S. equities, and a decline in the U.S. standard of living, potentially resulting in extreme social unrest.
Alternatively, the U.S. position could unwind more gradually, allowing for more favorable outcomes. The persistence of the current fiscal situation and the strength of the dollar, despite logical predictions to the contrary, suggests that bubbles in the financial system can last far longer than anticipated. In conclusion, while many have predicted an imminent catastrophe due to the national debt, the outcome remains uncertain. Whether the resolution is abrupt and severe or gradual and manageable, only time will tell.
National Debt By Year
Thank you for reading. Please feel free to leave any comments in our sign-up section.
The Beryl Consulting Group Editorial Team