Zero-Knowledge STARK Roll-Ups: The Idea
This article has been produced for, and is originally published by, Blocksmith. We are a research, strategy, and product studio providing drop-in blockchain and web3 specialism to organisations looking to create new business models, transform the way they operate, and increase efficiency through the application of blockchain technology. For more information or to get in touch with our team, visit our website at Blocksmith
Introduction
In the following article, we shall explore one of the most potent, albeit frequently misconstrued, cryptographic instruments devised: the Zero-Knowledge STARK protocol.
Note: This short piece is part of an ongoing series explaining the devices blockchains use to sustain scalability. You can read about other scaling methods in our “Scaling the Blockchain” blog post.
Transparency, while maintaining privacy and confidentiality, is the prominent proposition of blockchain technology. Privacy and confidentiality in any social context induce a certain level of trust. Most trust management solutions in the blockchain landscape have relied on the concept of "consensus," which primarily involves delegating the task of verifying a computation to a pool of verifiers. If a significant majority of these verifiers approves the computation, it is deemed trustworthy, i.e., backed by the community and written into a new block on the blockchain. However, this consensus protocol comes with some considerations; here are just a few:
 First, can the verifiers be trusted?
 Second, since all participants must perform the computation, the approach may be cumbersome and inefficient.
 Third, it may require exposing sensitive data to the pool of verifiers so they can reproduce the computation.
Hence, for most of blockchain history, this approach has fallen short of achieving a privacy-centric ecosystem amenable to large-scale institutional adoption. The blockchain may not be delivering what it initially promised. In recent years, however, various teams have developed some interesting technological applications, and we may have a path towards this privacy-focused endeavour with the L2 Roll-Up proposal, particularly the Zero-Knowledge kind.
Think of Rollups as straightforward protocols (a set of defined and measured steps) engineered with specific concerns in mind (e.g., original consensus setup, increased security versus transparency, and various other scenarios/preferences), aimed at solving particular issues, i.e., producing an interesting outcome for the considerations we are aiming to tackle. Rollups are so-called off-chain scaling solutions in the sense that they help reduce congestion on Layer-1 (the main chain) while enabling it to maintain its role as the keeper of truth in the blockchain ecosystem. A typical rollup protocol executes as follows:
 Users send transactions to off-chain system operators,
 These operators aggregate/consolidate thousands of transactions into a single batch, treated as a single processing unit,
 These batches are executed/processed/computed off-chain by whatever sub-protocol applies,
 A proof of processing is generated as a result, attesting to the accuracy and correctness of all transactions within the batch,
 This proof is then submitted back to the main blockchain as minimal summary data representing a state change in the ledger.
To date, there are two major types of blockchain rollups: Zero-Knowledge rollups and Optimistic rollups. The focus below will be on the former. You can read about Optimistic Rollups here.
The notion of the 'zero knowledge' protocol was first proposed in the 1980s by MIT researchers (source). The concept was not entirely new, as these researchers were working on problems related to interactive proof systems. The primary concern that gave rise to the proposal was this: what if we can't trust the pool of verifiers, i.e., those responsible for verifying that the submitted data is actually correct (coercion, biased interests, etc.)? Hence, the core, intriguing aspect of the protocol they developed is this:
**One individual can prove to another individual that something is valid, without having to disclose any information aside from the validity of the statement (the proof).**
Like magic! In other words, one can check the correctness of some computation without knowing what was executed. Hence, the verifiers have zero knowledge of what they are verifying but can confirm that whatever passed through is a correct computation (i.e., it wasn't tampered with). Although a multitude of zero-knowledge protocols exist, in practice three main ones have emerged and been adopted by the blockchain community in recent years to generate validity proofs. These are:

the Zero-Knowledge Succinct Non-Interactive Argument of Knowledge, a.k.a. ZK-SNARK

the Zero-Knowledge Scalable Transparent Argument of Knowledge, a.k.a. ZK-STARK

the Zero-Knowledge Bulletproof
Like any protocol, each comes with its own strengths and limitations. Here we will focus on the second one, the ZK-STARK protocol.
Disclaimer: Keep in mind that the details of these protocols can be quite dense. We will provide a simplified explanation to make it easy for anyone to understand what a ZK-STARK is and how it works in theory. We apologise to anyone who may find the upcoming simplification and generalisation of some concepts too basic. For more in-depth information, please check out the Further Reading section. Also, stay tuned to read about SNARKs.
ZK-STARK
The ZK-STARK, or Zero-Knowledge Scalable Transparent Argument of Knowledge, was introduced by Eli Ben-Sasson, professor at the Technion (Israel Institute of Technology).
First, let's review the composite taxonomy.
S – Scalable. This is a relative term, specifically versus the other rollup form, the ZK-SNARK. "Scalable" refers to the capability of generating succinct proofs for complex computations, thereby enabling efficient verification of large amounts of data. Both the proving time and the verification time (which is polylogarithmic in the size of the computation) of a ZK-STARK increase at a slower rate than those of ZK-SNARKs as computational requirements grow, which results in better scalability.
T – Transparent. "Transparent" refers to the ability for anyone to publicly verify the validity of proofs without revealing any underlying information or details of the computation or transaction. Also, and most importantly, ZK-STARKs do not require an initial trusted setup of interacting parties (in contrast to ZK-SNARKs). We will return to this last point below.
ARK – Argument of Knowledge. This refers to the ability of the zero-knowledge proof protocol to demonstrate that the prover has knowledge of something of value without revealing any information about the value itself. This is achieved through mathematical algorithms and cryptographic techniques that allow the prover to create a proof the verifier can check without learning anything beyond the fact that the prover knows the secret value.
Second, let’s keep in mind a few concepts:

Negligible probability — probability low enough that for all practical purposes it’s not worth worrying about.

Digital commitment scheme. A commitment scheme allows one party to ‘commit’ to a given message while keeping it secret, and then later ‘open’ the resulting commitment to reveal what’s inside. They can be built out of various ingredients, including, in our case, cryptographic hash functions.
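As a minimal sketch, a hash-based commitment can be built from a cryptographic hash and a random nonce. The helper names `commit` and `open_commitment` below are illustrative, not part of any particular library:

```python
import hashlib
import secrets

def commit(message: bytes) -> tuple[bytes, bytes]:
    """Commit to a message by hashing it together with a random nonce.
    Returns (commitment, nonce); the nonce stays secret until opening."""
    nonce = secrets.token_bytes(32)
    commitment = hashlib.sha256(nonce + message).digest()
    return commitment, nonce

def open_commitment(commitment: bytes, nonce: bytes, message: bytes) -> bool:
    """Open the commitment: anyone can recompute the hash and compare."""
    return hashlib.sha256(nonce + message).digest() == commitment

c, n = commit(b"secret batch state")
assert open_commitment(c, n, b"secret batch state")        # honest opening verifies
assert not open_commitment(c, n, b"tampered batch state")  # any change is detected
```

The nonce hides the message (two commitments to the same message look unrelated), while the hash binds the committer to it.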

Cryptography pillars: The first rule of modern cryptography is never to trust people who claim things without proof. Goldwasser, Micali and Rackoff proposed the following three properties that every zero-knowledge protocol must satisfy. Stated informally, they are:
 Completeness. It can be proven with a proof: the protocol will eventually convince me (with a negligible error probability), provided we run it enough times.
 Soundness. No participant can cheat the proof: I will detect any treachery with overwhelming probability, leaving only a negligible probability of a false proof.
 Zero-knowledgeness. I have no idea what solution was used; I just know it's correct.

Polynomial:

Polynomial definition: A polynomial is essentially a sum of many (hence "poly") monomials (e.g., 3x). It is a mathematical function characterized by powers and coefficients, typically taking a form like this: “−6y² + 3xy²z − 0.1xz − 200y + 0.5.”

Polynomial representation: On an XY axis, imagine the curve traced by the polynomial's points. Polynomials are valuable because their graphical representations are highly sensitive: even the slightest alteration in the polynomial function (reflecting a change in the computed data) results in significant changes across the curve. This property is exploited for error amplification. The process is as follows: suppose a computation results in a specific polynomial curve, and it is declared correct. As a verifier, you would challenge this representation by requesting various coordinates. All responses must lie on that curve; if they do not, there is a high likelihood of an error.
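The spot-checking idea can be sketched in a few lines: two polynomials that differ in even a single coefficient disagree at almost every evaluation point, so a verifier querying random coordinates catches a tampered computation with overwhelming probability. This is a toy illustration, not a production protocol:

```python
def evaluate(coeffs, x, p=2**31 - 1):
    """Evaluate a polynomial (coefficients listed lowest degree first)
    at x, working modulo a prime as STARK-style protocols do."""
    result = 0
    for c in reversed(coeffs):        # Horner's rule
        result = (result * x + c) % p
    return result

honest = [5, 0, 3, 7]                  # 5 + 3x^2 + 7x^3
cheating = [5, 0, 3, 8]                # one coefficient tampered with

# Two distinct degree-3 polynomials can agree on at most 3 points,
# so they disagree almost everywhere and a random spot check exposes the change.
disagreements = sum(
    evaluate(honest, x) != evaluate(cheating, x) for x in range(1000)
)
print(disagreements)  # 999: the two polynomials agree only at x = 0
```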

Polynomial Degree: The degree of a polynomial is the highest power to which its variable is raised. For example, in the polynomial f(x) = 3x^2 + 5x + 2, the highest power of x is 2. The degree is an essential characteristic for understanding a polynomial's properties, solving equations involving it, and determining its overall behaviour in various mathematical and practical contexts. In the context of blockchain (and mathematics in general), it is important to note that low-degree polynomials are simpler and easier to work with than high-degree ones. They have fewer "degrees of freedom", that is, fewer values that can be chosen independently without affecting the rest of the polynomial. Restricting the degree of a polynomial effectively limits the prover's ability to construct a false claim that nevertheless satisfies the equation.

sources: https://vitalik.ca/general/2017/11/09/starks_part_1.html
Boundary Constraint Checking: This refers to the process of verifying whether a polynomial satisfies certain constraints or conditions at specific boundary points or intervals. These constraints are often defined to ensure that the polynomial behaves as expected within a given range or context.
Merkle Trees: A Merkle tree is a type of binary tree comprised of a set of nodes, each containing a cryptographic hash. The tree is constructed from the bottom up.
1. Leaf Nodes: The leaf nodes at the bottom of the tree contain hashes of the data blocks (for instance, transaction data in the case of a blockchain).
2. Non-Leaf Nodes: Each non-leaf node is a hash of its child nodes. For example, in a binary Merkle tree, each non-leaf node is the hash of the concatenation of its two children.
3. Root Node: The root node, or Merkle root, is the hash of its child nodes and represents a summary of all the data in the tree. If even one bit of the data in the tree changes, the Merkle root changes, making Merkle trees useful for data integrity checks.
sources: https://en.wikipedia.org/wiki/Merkle_tree
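The bottom-up construction described above can be sketched as follows; the helper name `merkle_root` is ours, and real implementations add details such as domain separation and canonical leaf encoding:

```python
import hashlib

def merkle_root(leaves: list[bytes]) -> bytes:
    """Build a binary Merkle tree bottom-up and return its root hash."""
    level = [hashlib.sha256(leaf).digest() for leaf in leaves]  # leaf nodes
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node if odd
            level.append(level[-1])
        # each non-leaf node hashes the concatenation of its two children
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

txs = [b"tx1", b"tx2", b"tx3", b"tx4"]
root = merkle_root(txs)
# Changing even one transaction changes the root entirely:
assert merkle_root([b"tx1", b"tx2", b"tx3", b"tx5"]) != root
```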
The STARK scenario
A prover executes a set of transactions (e.g., consolidates them). It then goes through a multiparty (proververifier) STARK protocol to generate a single proof of the processing. Importantly, no information about the actual transactions is disclosed; only the resulting proof is shared.
Mathematical Representation of the Concept: The concept can be concisely summarized with the following algebraic expression:
f(x) = y; given z as the proof
The function f(x) represents a computation that takes a substantial amount of time for the prover to complete. In this process, x symbolizes the input or the dataset that the prover aims to obfuscate. The outcome of this computation is the output y, and the prover additionally provides a proof of execution, designated as z. Conversely, the verifier is capable of verifying this proof rapidly, typically within milliseconds. Through this verification process, the verifier is assured that y is indeed the correct result of the computation, yet remains unaware of the specific details of x.
The STARK protocol
One-liner Version of the STARK Protocol:
Program Input → Function Execution → Algebraic Expression → Sequence of Polynomials → Low-Degree Single Polynomial Generation → Cryptographic Encryption → Proof Generation
Succinct Version of the STARK Protocol:
 Define the Computation: Specify the computational task to be performed.
 Encode Using AIR: Translate the computation into a sequence of state transitions using Algebraic Intermediate Representation (AIR).
 Set Constraints: Establish a set of constraints that validate these state transitions.
 Formulate Polynomial Equations: Convert these constraints into polynomial equations.
 Sequence of Polynomials: Represent the entire computation as a series of polynomials.
 Apply FRI Protocol: Use the Fast Reed-Solomon Interactive Oracle Proof of Proximity (FRI) protocol to verify the low-degree nature of these polynomials.
 Encode into FRI Layer: Translate the sequence of polynomials into the FRI layer, creating a Merkle tree structure.
 Recursive FRI Application: Iteratively apply the FRI protocol to ultimately obtain a single lowdegree polynomial.
 Non-Interactive Proof with Fiat-Shamir: Utilize the Fiat-Shamir heuristic to transform the proof into a non-interactive format by replacing verifier challenges with hashes of the prover's previous messages.
 Merkle Tree Creation: Form a Merkle tree from the final sequence of polynomials, with the Merkle root acting as a cryptographic commitment to the computation.
 Generate Final Proof: Produce the conclusive proof, incorporating the Merkle root, FRI commitments, and coefficients of the final polynomial.
 Proof to Verifier: Send this proof to the verifier, who can independently check it without further interaction with the prover.
The Detailed Version of the STARK Protocol:
Here we revisit the steps above in more detail.
Step 1: Define the Computation
1.1 Program: This is the specific computation that we want to prove has been executed correctly. In blockchain contexts, for example, it could involve consolidating transactions that are claimed to have been processed validly.
1.2 Function: The program is translated into a function. This function accepts certain inputs (such as the initial state of the transactions and a secret key) and generates an output (like the final state after the transactions). Note: The function is designed in a way that makes it virtually impossible to deduce the input from the output, ensuring the security and integrity of the process.
1.3 Arithmetic Expression: To align the function with the requirements of the cryptographic proof system, it is converted into an arithmetic expression. This expression is a blend of numbers, variables, and mathematical operations (like addition, subtraction, multiplication, and division) combined into a comprehensive expression that symbolizes the entire computational process.
Step 2: Encoding Computation as a Sequence of State Transitions
The protocol then encodes the computation as a sequence of state transitions using Algebraic Intermediate Representation (AIR). This is essentially the definition of a set of constraints that validate the state transitions. These constraints are then translated into polynomial equations, which represent the arithmetic computation as a sequence of polynomials. Here's a high-level overview of how the AIR encoding works:
2.1 Computation: First, you would have some arithmetic expression representing the computation to be proven correct.
Example: The computation could be multiplying 3 by 4, claiming the result is 12.
2.2 State Transition: The computation is then expressed as a sequence of states, where each state represents the system at a specific point in time. The computation is thus represented as a series of transitions from one state to the next.
Example: Start with an initial state of 0 (before multiplication), transitioning to a state of 12 (postmultiplication).
Step 3: Defining Constraints: The valid state transitions are then defined by a set of constraints. Each constraint is a mathematical equation that takes a state (or a few states) as input and produces a Boolean output  true if the state(s) satisfy the constraint, and false otherwise. The constraints are designed so that they are satisfied if and only if the state transitions represent a correct execution of the computation.
Example: The constraint that defines valid state transitions might be new_state = old_state + 3 x 4. This constraint is satisfied if and only if the state transitions represent a correct execution of the computation.
Step 4: Polynomial Representation: These constraints are then translated into polynomial equations. Each constraint becomes a polynomial that equals zero when the constraint is satisfied.
Example: We can represent this constraint as a polynomial equation like new_state - old_state - 3 x 4 = 0.
Step 5: Sequence of Polynomials: The computation is therefore represented as a sequence of polynomials, with each polynomial corresponding to a time step. The coefficients of these polynomials represent the state at each step, and the sequence of polynomials represents the state transitions.
Example: The sequence includes 0 = 0 for the initial state; this is like saying, "at the start, we have nothing, or zero." Before the computation (the multiplication of 3 by 4), the state is zero. Then, for the transition to the final state (after the multiplication), the polynomial would be 12 = 0 + 3 x 4: starting from an initial state of zero and, after the computation, reaching a state of 12, the product of 3 and 4.
Verification Process: To validate the computation, a verifier simply checks each polynomial in the sequence against the constraints. This is done by substituting the coefficients of the polynomial into the constraint equations and verifying that they equal zero.
Example: For verification, confirm that 12 - 0 - 3 x 4 equals zero, thereby validating the transition from 0 to 12 as a correct execution of the multiplication.
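The toy trace and constraint from the examples above can be checked mechanically. The following sketch, with illustrative helper names, shows how a verifier would test every state transition against the constraint polynomial:

```python
# The execution trace for the toy computation "multiply 3 by 4":
# state 0 before the multiplication, state 12 after it.
trace = [0, 12]

def transition_constraint(old_state: int, new_state: int) -> int:
    """The constraint polynomial new_state - old_state - 3 x 4.
    It evaluates to zero exactly when the transition is valid."""
    return new_state - old_state - 3 * 4

def verify_trace(trace: list[int]) -> bool:
    """Check every consecutive pair of states against the constraint."""
    return all(
        transition_constraint(trace[i], trace[i + 1]) == 0
        for i in range(len(trace) - 1)
    )

assert verify_trace([0, 12])      # correct execution: 12 - 0 - 12 == 0
assert not verify_trace([0, 13])  # a tampered result is rejected
```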
At this stage, we have polynomials that represent the computation that the prover is claiming to have performed. Polynomials are used because they can be evaluated very efficiently, and certain mathematical tricks can be employed to create the proof from them.
Steps 6 to 11: All these steps are intricately connected in the rollup process, so let's present them in a more sequential manner.
Now, the protocol proceeds to encode the sequence of polynomials into the FRI layer, interactively and iteratively. This is where the ZK-STARK magic happens, and it is potentially the most difficult part of the solution. The solution makes use of two techniques: the Fast Reed-Solomon Interactive Oracle Proof of Proximity (FRI) combined with the Fiat-Shamir heuristic.
 The FRI protocol is responsible for reducing the degree of the polynomial sequence and enabling low-degree testing. This typically involves recursive steps where the prover and verifier interact, with the prover providing commitments and the verifier requesting evaluations at specific points of the polynomial representation.
 The Fiat-Shamir heuristic is a method used to transform interactive proof systems (prover-verifier) into non-interactive ones by replacing the verifier's challenges with hashes of the prover's previous messages.
Let’s explain:
The FRI protocol begins with the sequence of polynomials obtained from the previous steps, usually represented either as coefficients or as evaluations at specific points. Initially, these polynomials can be of high degree. In STARKs, the FRI protocol typically involves an interactive process between a prover and a verifier, structured into three key phases: Commitment, Challenge, and Response. It is a recursive process; the following occurs at each iteration:

Generation of Commitments: The prover, who is trying to convince the verifier that they know a particular piece of information, starts by performing a computation derived from the evaluations of the polynomial sequence at specific points (potentially up to about a billion, depending on the complexity and requirements of the computation) and creating a series of commitments to this computation. The nature of the commitment can vary based on the protocol details and how the polynomials are utilized in proof generation. Generally, in polynomial commitment schemes, this might involve evaluating the polynomials at a confidential point known only to the prover. For instance, if the prover possesses a secret point s, they would evaluate the polynomials at s to obtain values like P(s) and Q(s). The commitments are essentially cryptographic hashes, irreversible and cryptographically secure, similar to the root hashes of Merkle trees; they summarize results the prover cannot later retract. The commitments are then communicated to the verifier. They serve as the basis for the forthcoming interaction, effectively committing the prover to the current state of the polynomial sequence, and they form the foundation upon which the subsequent challenge and response phases, and ultimately the integrity of the proof, are constructed and verified.

Challenges in the FRI Protocol using the Fiat-Shamir Transformation: In a traditional interactive proof system, the verifier then issues random challenges based on this commitment. The challenge typically requires the prover to provide additional details that can validate the computation, such as evaluations of the committed polynomials at specific points. However, in the context of the FRI protocol, particularly within STARKs, the Fiat-Shamir heuristic transforms this into a non-interactive process:

Instead of an external verifier sending challenges, the prover generates these challenges themselves through a deterministic yet unpredictable process: a cryptographic hash function (commonly SHA-256 or SHA-3) is applied to the current commitment or other relevant information. The prover's interaction with the verifier is thus simulated, eliminating the need for direct, real-time communication. The hash function acts as an "oracle," ensuring that the prover cannot influence the challenge, and its output serves as the new random challenge for that round: it appears random and unpredictable, fulfilling the same function as a challenge from an external verifier. Example: In a given scenario, applying the hash function to the computation context might yield a new random value r. This value becomes the challenge to which the prover must respond.
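A minimal sketch of this challenge derivation, assuming SHA-256 and reduction into a small prime field (real STARK implementations use carefully chosen fields and hash the full transcript):

```python
import hashlib

def fiat_shamir_challenge(transcript: bytes, p=2**31 - 1) -> int:
    """Derive a challenge deterministically by hashing the transcript so far.
    The prover cannot predict or steer the result, so it stands in for a
    verifier's random challenge."""
    digest = hashlib.sha256(transcript).digest()
    return int.from_bytes(digest, "big") % p

commitment = hashlib.sha256(b"round-1 polynomial commitment").digest()
r = fiat_shamir_challenge(commitment)

# Deterministic: any verifier replaying the transcript derives the same r...
assert r == fiat_shamir_challenge(commitment)
# ...but a different transcript yields an unrelated challenge.
assert r != fiat_shamir_challenge(b"some other transcript")
```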

Through the Fiat-Shamir transformation, the FRI protocol within STARKs ensures that the challenges remain robust and unpredictable, maintaining the integrity and security of the proof generation process. The role of the actual verifier is thus limited to verifying the proof after its generation, rather than actively participating in the proof generation process. This non-interactive nature greatly enhances the scalability and practicality of the system by allowing proofs to be independently verified without ongoing interaction (no trusted parties).


Response Computation: The prover responds to this challenge by providing the requested information. These responses are crafted to convincingly demonstrate the prover's knowledge of the necessary information, revealing portions of the computation in a way that maintains the secrecy of the underlying data. The key objective is to assure any potential verifier of the proof's validity and its adherence to the required properties. Example: For a given challenge, the prover evaluates the polynomials at a new point s + r, producing values like P(s + r) and Q(s + r). These evaluations form the response.
The process continues recursively, with the prover engaging in further rounds of commitments, challenges, response computations, and, importantly, degree reduction. In each iteration, the prover generates a new polynomial sequence of lower degree (the reduction is logarithmic) compared to the previous one. This reduction is achieved by dividing the polynomials in the sequence by specific factors and discarding higher-degree terms. The iterations continue until the degree of the polynomial sequence reaches a predetermined low-degree threshold (the actual reduction depends on the specific requirements and parameters of the ZK-STARK implementation) or until a desired level of succinctness is achieved. The FRI protocol eventually produces a single low-degree polynomial as its output. This allows the prover to demonstrate that the polynomial sequence satisfies a "proof of proximity" property, meaning that the sequence, when evaluated at random points, exhibits characteristics similar to those of a low-degree polynomial, i.e., it satisfies the desired properties. By the end of the FRI protocol, a low-degree polynomial is obtained, and the prover possesses the necessary information to prove its low-degree nature.
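The degree-halving recursion can be illustrated with a toy folding step: split the coefficients into even- and odd-index halves and combine them with the round's challenge. This is a simplified sketch of the FRI split-and-fold idea, not a complete protocol; the commitments and consistency checks are omitted:

```python
def fri_fold(coeffs: list[int], beta: int, p=2**31 - 1) -> list[int]:
    """One FRI folding step: write f(x) = g(x^2) + x*h(x^2), where g takes the
    even-index coefficients and h the odd-index ones, then combine them with
    the challenge beta into g(y) + beta*h(y). The degree roughly halves."""
    g = coeffs[0::2]                     # even-index coefficients
    h = coeffs[1::2]                     # odd-index coefficients
    return [(a + beta * b) % p for a, b in
            zip(g, h + [0] * (len(g) - len(h)))]

# A degree-7 polynomial folds to degree 3, then 1, then 0:
f = [3, 1, 4, 1, 5, 9, 2, 6]
layer = f
challenges = [7, 11, 13]                 # derived via Fiat-Shamir in practice
for beta in challenges:
    layer = fri_fold(layer, beta)
print(len(layer))  # 1: a single constant, the final low-degree polynomial
```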
The resulting polynomial, along with the commitments, challenges, and responses made during the FRI iterations, forms part of the comprehensive proof that is eventually disclosed by the prover, allowing any independent verifier to examine and validate these elements. It can be shared or published, and anyone can independently verify its validity by checking adherence to the specified rules and constraints. The verifier's task is to check whether P(s + r) = Q(s + r) holds, assessing the response against the commitment and challenge. If the response aligns with both, the verifier accepts the proof, thereby confirming the polynomial's low degree and the computation's integrity; otherwise, it is rejected.
By the end of this step, the sequence of polynomials is encoded into the FRI layer, and through the application of the FRI protocol and the Fiat-Shamir heuristic, the protocol ensures a compact, efficient, and non-interactive rollup system. What remains is encrypting the outcome and producing a proof.
Encryption Stage: Following the completion of the FRI protocol, the prover enters the encryption phase, leading to the formulation of the definitive zero-knowledge proof. In essence, the process transforms the computation, represented by the single low-degree polynomial, into an encrypted format that still allows for an auditable commitment, thanks to homomorphic encryption techniques. Homomorphic encryption is a method that allows computations to be performed directly on encrypted data, eliminating the need to decrypt it first, and hence preserves specific attributes intrinsic to the original polynomial even in its encrypted state. For instance, consider the original polynomial P(x). When encrypted, it becomes E(P(x)). The prover can then perform operations like E(P(x)) + E(P(x)) or E(P(x)) × E(P(x)), and the results of these operations on the encrypted polynomial will be equivalent to performing P(x) + P(x) or P(x) × P(x) on the original, unencrypted polynomial. This approach is pivotal for maintaining privacy: it allows the prover to confirm the correctness of the polynomial's computation without disclosing its specific components. The original high-degree polynomial remains concealed, ensuring that the privacy of the underlying data is upheld throughout the verification process.
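The homomorphic property this passage leans on can be illustrated with a toy discrete-log-style mapping, where multiplying "encrypted" values corresponds to adding the hidden ones. This is purely illustrative: the parameters are far too small for security, and it is not the actual machinery of a STARK:

```python
# Toy parameters (illustrative only; real schemes use far larger groups).
p = 2**31 - 1       # a prime modulus
g = 7               # a small base playing the role of a group generator

def E(m: int) -> int:
    """Map a value into the exponent: E(m) = g^m mod p.
    Multiplying images corresponds to adding the hidden values."""
    return pow(g, m, p)

a, b = 1234, 5678
# The homomorphic property: operating on the mapped values mirrors
# operating on the originals, without ever exposing a or b.
assert E(a) * E(b) % p == E(a + b)
assert pow(E(a), 3, p) == E(3 * a)
```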
Proof Generation, Involving Merkle Trees: After employing a series of cryptographic primitives and homomorphic encryption techniques to process the low-degree polynomial, the prover generates a sequence of polynomial evaluations. These evaluations encapsulate the complete computation. To securely commit to this sequence of evaluations, the prover typically uses a Merkle tree. This tree structure effectively aggregates the polynomial evaluations. The root of the Merkle tree, known as the Merkle root, acts as a compact yet comprehensive commitment to the entire sequence of evaluations. It also encompasses additional data such as the sequence of FRI commitments and the coefficients of the final polynomial. This Merkle root is then included in the final zkSTARK proof. It comes as a large number, a string of hexadecimal characters, and might look something like this:
5e884898da28047151d0e56f8dc6292773603d0d6aabbdd62a11ef721d1542d8
This format significantly reduces the amount of data that needs to be stored on a blockchain, contributing to improved scalability and efficiency and making the system more practical for widespread use. In addition to this Merkle root, the final proof generally contains the following:
 Commitments: These are cryptographic commitments to the values that the prover has committed to during the proof generation process. These might include commitments to the trace polynomials and/or the constraints, as well as other intermediate computations.
 Proof of Proximity or Proof of Consistency: This proves that the polynomial evaluations committed to in the Merkle root are consistent with the constraints of the computation. This typically involves a proof of low-degree testing from the Fast Reed-Solomon Interactive Oracle protocol described above.
 Proof of Correctness: This proves that the prover performed the computation correctly. This might involve additional commitments and responses, depending on the specific protocol.
 Fiat-Shamir Hashes: The challenges generated by the prover during the Fiat-Shamir transformation.
This proof is self-contained and non-interactive, meaning it doesn't need any further interaction with the prover to be verified. It is typically posted directly back to the blockchain.
Once the proof is posted on the blockchain, any participant can independently verify it. The verification process involves checking the various components of the proof (those mentioned above, and running through the Merkle tree) to ensure that they satisfy the required properties and constraints of the computation being proven. The entire purpose of the protocol is that this verification can be done efficiently, relatively quickly even for large and complex computations, and it typically scales logarithmically with the size of the computation or the batch of transactions. For example, for a batch of thousands of transactions, the total verification time might be on the order of a few seconds, but this can vary depending on a variety of factors (the nature of the computation or transactions, the computational power of the verifier's hardware, etc.). Verification times on the order of milliseconds per transaction are not uncommon.
Benefits vs Pains
The use of ZK-STARKs allows for the succinct proof of claims, eliminating the need for a verifying entity to re-execute the computation. Instead, the entity can simply validate the proof, making the computational task much simpler. Additionally, the proof is publicly verifiable, and there is no requirement for interaction with the prover.
**The advantages:**
Quantum Resistance: ZkSTARKs are considered quantum-resistant because they rely purely on hash functions, which are believed to be secure against attacks by quantum computers. This minimal cryptographic assumption makes them a more future-proof choice compared to other cryptographic methods.
 Independence from Trusted Parties: Unlike zk-SNARKs, zk-STARKs require no trusted setup or trusted parties to operate. This independence enhances their transparency and security, as they rely on publicly verifiable randomness and clever information-theoretic constructions to establish the system. The absence of a trusted setup eliminates certain vulnerabilities and potential points of failure.
 Scalability: zk-STARKs scale well, a significant advantage over zk-SNARKs. As the amount of data being processed grows, the protocol's runtime grows sub-linearly rather than proportionally, making it more efficient for large-scale computations and datasets.
 Faster Proof Generation: Generating a proof is typically quicker with zk-STARKs than with zk-SNARKs. This speed contributes to the overall efficiency and practicality of zk-STARKs in various applications, particularly in blockchain and other decentralized technologies where quick turnaround is crucial.
**The drawbacks:**
 Large Proof Size: One of the primary drawbacks of zk-STARKs is the relatively large size of the proofs they generate. While zk-SNARK proofs are small (around 288 bytes), zk-STARK proofs are significantly larger (around 40 to 50 KB) due to their reliance on collision-resistant hash functions. It's essential to note, however, that proof size doesn't directly dictate how long a proof takes to generate; the type of computation also plays a role.
 Verification Time and On-Chain Data Storage: The larger proofs take longer to verify than zk-SNARK proofs and require more on-chain data storage, which translates into higher costs, commonly referred to as "gas" in blockchain environments.
 Scalability Trade-Off: Despite larger proofs and potentially longer verification times, zk-STARKs offer better scalability than zk-SNARKs. Both proof generation and verification time grow at a slower rate as the complexity of the computation escalates. This is a crucial advantage in applications where computational requirements are expected to grow.
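As a rough illustration of this trade-off, consider verifying a batch of n transactions by re-executing them versus checking a single STARK proof. The figures below are synthetic (the constant factor `c` is an arbitrary placeholder, not a measured cost); the point is the shape of the growth, since the verifier's work is polylogarithmic rather than linear.

```python
import math

def reexecution_cost(n: int) -> int:
    """Naive verification: every verifier replays all n transactions, O(n)."""
    return n

def stark_verifier_cost(n: int, c: int = 100) -> int:
    """Illustrative polylogarithmic verifier cost, O(log^2 n).
    The constant `c` is an arbitrary placeholder, not a measured figure."""
    return int(c * math.log2(n) ** 2)

# The gap widens dramatically as the batch grows.
for n in (10**3, 10**6, 10**9):
    print(f"n={n}: re-execution={reexecution_cost(n)}, stark={stark_verifier_cost(n)}")
```

Squaring the batch size roughly quadruples the verifier's work under this model, while re-execution cost grows a million-fold: that asymmetry is what makes proof-based verification attractive at rollup scale.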
That being said,
zk-STARKs haven't been around for long, and we are still in the early stages of developing application-based zero-knowledge proofs. Most implementations are in their initial technological stress phases, with various protocols currently being developed and deployed across the blockchain ecosystem by teams such as StarkWare (check out their products!). There is still much work to be done, and ongoing debate and divergence of opinions within development communities has led to some level of politicization.
The fact that the verifier's workload grows only slightly with the size of the input gives the protocol asymptotically optimal efficiency (polylogarithmic proof size and verifier time, and quasi-linear prover time), and its resistance to quantum computing makes it an ideal candidate as a future-proofing technology. This has sparked an explosion of development in the crypto space.
That's it for now!
We hope that this has clarified the topic to some extent and revealed that these protocol scenarios are not overly intricate. If you wish to delve deeper into the finer details of the zk-STARK protocol, please refer to the links provided below. Alternatively, you can explore additional articles on blockchain scaling solutions in our blog post.
Thank you for reading
Further Readings
 starkware.co
 aszepieniec.github.io
 hackmd.io
 vitalik.ca
 papers.ssrn.com
 eccc.weizmann.ac.il
 github.com/starkwarelibs/pdf
 www.crowdcast.io
 github.com/starkwarelibs/ethSTARK