By Sheheryar Parvaz
Writing this to address a common misunderstanding: Solana's MCP proposal does not solve scaling. It doesn't even claim to! Yet the misunderstanding persists. MCP solves a fundamentally different problem, one that is crucial but distinct from horizontal scaling, which is what centralized ledgers with bridging attempt to solve. In line with @dima_null's two-chain thesis, I believe both approaches will coexist and be chosen based on each application's safety/liveness requirements.
MCP enables multiple parallel "inclusion sites": geographically distributed proposers to which transactions can be submitted for inclusion in the next slot. Upon hearing the same news, a trader in New York and a trader in Tokyo can each send their transaction to their closest proposer for the same slot, i.e. for the same logical point in time. Ordering is decided post-inclusion, and execution happens after the proposers' blocks are aggregated. In expectation, this system is globally fair with respect to latency. Arguably, it is fairer than traditional financial systems, which have only one centralized inclusion site per exchange, so latency varies with geography.
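To make the fairness claim concrete, here is a toy latency model on a one-dimensional "globe". All positions and units are hypothetical, a minimal sketch of the nearest-inclusion-site idea rather than anything specified by the proposal:

```python
def latency_ms(trader_pos, sites, one_way_ms_per_unit=1.0):
    """One-way latency from a trader to their nearest inclusion site."""
    return min(abs(trader_pos - s) for s in sites) * one_way_ms_per_unit

# Positions in arbitrary units, illustrative only.
new_york, tokyo = 0, 100

single_site = [new_york]         # traditional: one site, colocated with the exchange
multi_site = [new_york, tokyo]   # MCP: geographically distributed proposers

for name, pos in [("NY trader", new_york), ("Tokyo trader", tokyo)]:
    print(name,
          "| single site:", latency_ms(pos, single_site), "ms",
          "| MCP:", latency_ms(pos, multi_site), "ms")
```

With a single site, the Tokyo trader pays the full transpacific round trip while the New York trader pays nothing; with distributed proposers, both reach an inclusion site at roughly the same latency for the same slot.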
But more proposers do not imply more scale. MCP doesn't remove the existing bottlenecks, such as network bandwidth (for propagating blocks to all validators) and per-node compute (for re-executing blocks). The amount of data and compute each validator handles is unchanged, so total transaction throughput is unchanged. What MCP does lower, in expectation, is the median latency for all users. So: not more scale, but much fairer inclusion. This is good! It prevents some MEV, reduces latency variance, and distributes some workload across validators. Overall, the chain is better off for it.
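The bottleneck argument fits in a few lines. The capacity figures below are illustrative assumptions, not measured Solana numbers; the point is only that the cap does not depend on the proposer count:

```python
# Illustrative per-validator capacities (not measured figures).
validator_ingest_tps = 50_000    # every validator must download every block
validator_execute_tps = 80_000   # every validator must re-execute every tx

def chain_tps(num_proposers):
    # Each validator still processes the union of all proposers' blocks,
    # so throughput is capped by per-node capacity, not proposer count.
    return min(validator_ingest_tps, validator_execute_tps)

print(chain_tps(1), chain_tps(32))  # same cap either way
```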
Ledgers with centralized execution, on the other hand, will use execution proofs and data availability services to overcome the compute and network bottlenecks, respectively, with each instance executing independently. This is where we get horizontal scaling: as the number of sequencer nodes increases, so does overall transaction throughput. Centralized sequencers also enable colocation, giving the lowest possible latency and the lowest latency variance. Some applications need this to achieve usable UX. With forced inclusion, we can restore partial liveness guarantees and alleviate censorship concerns. The applications that prefer the centralized execution approach will be different, and arguably more common. Developers can, and should, explore the tradeoff space.
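Contrast this with the MCP case above: because each instance executes independently and validators verify proofs rather than re-executing, capacity adds up across instances. Again, the per-sequencer figure is a hypothetical placeholder:

```python
per_sequencer_tps = 50_000   # illustrative capacity of one centralized sequencer

def total_tps(num_sequencers):
    # Each instance executes its own transactions independently; validators
    # check succinct execution proofs instead of re-executing, so adding
    # instances adds throughput rather than adding redundant work.
    return num_sequencers * per_sequencer_tps

print(total_tps(1), total_tps(4))  # throughput scales linearly with instances
```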
Specializing the execution environment also allows alternative fee mechanisms. A decentralized system with a general execution environment must accept all transactions to preserve liveness guarantees, and the resulting flat fee disincentivizes making markets efficient whenever the expected gain is lower than the fee. By tailoring the fee mechanism to the specific program, we can charge application-native fees (e.g. trading fees) instead of a flat per-operation fee, creating better UX.
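A toy example of the disincentive, with all fee levels and volumes invented for illustration: a market maker considers refreshing a quote whose expected profit is small. Under a flat per-operation fee the update is unprofitable, while under an app-specific fee charged only on filled volume it is worthwhile:

```python
# Hypothetical numbers, chosen only to illustrate the sign flip.
flat_fee = 0.05            # per-operation fee in a general execution environment
trading_fee_rate = 0.001   # app-specific fee, charged only on filled volume

expected_gain = 0.02       # expected profit from refreshing a quote
expected_filled_volume = 10.0

profit_flat = expected_gain - flat_fee
profit_app = expected_gain - trading_fee_rate * expected_filled_volume

print("flat-fee profit:", profit_flat)   # negative: the quote never gets refreshed
print("app-fee profit:", profit_app)     # positive: the market stays efficient
```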
So, two architectures: multiple concurrent proposers for high liveness, low-ish median latency, and fair inclusion; centralized execution for colocation, creative fee mechanisms, ideal UX, and horizontal scaling. They are not independent; rather, they build on and compose with each other.