# Anonymous Point Collection – Improved Models and Security Definitions


by Matthias Heinrich Nagel

Karlsruher Institut für Technologie, Institut für Theoretische Informatik



**Impressum**

Karlsruher Institut für Technologie (KIT)
KIT Scientific Publishing
Straße am Forum 2
D-76131 Karlsruhe

KIT Scientific Publishing is a registered trademark of Karlsruhe Institute of Technology. Reprint using the book cover is not allowed.

www.ksp.kit.edu

*This document – excluding the cover, pictures and graphs – is licensed under a Creative Commons Attribution-Share Alike 4.0 International License (CC BY-SA 4.0): https://creativecommons.org/licenses/by-sa/4.0/deed.en*

*The cover page is licensed under a Creative Commons Attribution-No Derivatives 4.0 International License (CC BY-ND 4.0): https://creativecommons.org/licenses/by-nd/4.0/deed.en*

Print on Demand 2020 – Gedruckt auf FSC-zertifiziertem Papier

ISBN 978-3-7315-1023-9 DOI 10.5445/KSP/1000117751

# **Anonymous Point Collection—Improved Models and Security Definitions**

Zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften von der KIT-Fakultät für Informatik des Karlsruher Instituts für Technologie (KIT) genehmigte Dissertation

von Matthias Heinrich Nagel

Tag der mündlichen Prüfung: 29. Januar 2020

1. Referent: Prof. Dr. Jörn Müller-Quade
2. Referent: Prof. Dr. Ralf Reussner

## **Acknowledgments**

First and foremost, I would like to thank my advisor Jörn Müller-Quade for granting me the opportunity to write this thesis, for his frankness and his kindness. He not only shared his love-hate relationship with Universal Composability with me, but also instilled in me the conviction that provable security matters. Moreover, I am grateful for the many good moments we had. I wish to thank Ralf Reussner for taking an interest in my work and for agreeing to be my co-referee.

This thesis would not have been possible without Andy Rupp. He had the initial idea for the topic and also co-authored two publications. I have to thank him for all the knowledge and insights on recent cryptographic building blocks for practical and efficient protocols, which he readily passed on to me.

I would like to thank Brandon Broadnax, with whom I had the pleasure of co-authoring another publication. He taught me a lot of technical tricks and was a reliable oracle for any kind of UC-related question.

I had the pleasure of working with a lot of wonderful colleagues. Out of many, three stand out in particular. Dirk Achenbach was an excellent mentor during my first year at the institute. Jeremias Mechler was an awesome partner in the many projects beyond crypto that inevitably arise at an institute. And last but not least, I want to thank my office mate Rebecca Schwerdt for her support, for her friendship, for backing me up in times of hardship, for fruitful discussions on cryptography as well as many enjoyable non-crypto conversations, for being a co-author, and for a myriad of cups of tea.

# **Contents**





## **Abstract**

In numerous user-centric, cyber-physical systems, point collection and redemption mechanisms are a core component. Loosely speaking, this component or building block may be viewed as a personal "piggy bank" that allows users to deposit and disburse points. Depending on the context, points might be interpreted in numerous ways: monetary units (e.g. Euro cents), loyalty rating points, reliability credits, etc. This thesis deals with the problem of *anonymous* point collection.

Applications currently deployed in practice typically bind the stored value to some ID (e.g. the serial number of a card) or, even worse, to a user account. In other words, existing systems do not provide anonymity for the participating users, are at best only pseudonymous, and allow transactions belonging to the same user to be linked. This enables tracing a user's movements. In the literature, several privacy-preserving solutions have been proposed which target specific scenarios: inter alia (anonymous) e-cash, anonymous reputation systems, loyalty systems, and incentive systems. None of these consider anonymous point collection as a generic, multi-purpose building block. While the latter does not need to be a disadvantage per se, the proposed solutions are typically very restricted (e.g. they only look at the specific aspect of point deposition) or completely ignore important features (e.g. blacklisting) which might be required for practical deployment. Moreover, a majority of them lack formal security models, not to mention security proofs, and rely on the hope that some vague notion of security and/or privacy is satisfied.

This thesis aims at two goals.

First and foremost, this thesis is a comprehensive, formal treatment of anonymous point collection as a generic building block, together with a rigorous security model and proof. To this end, a definition of anonymous point collection is carved out which does not only provide a strong notion of security and privacy, but also covers features which are essential for practical use. Thereby, the proposed definition broadens the applicability of such a building block in real-world scenarios. As a pure definition is hollow if it cannot be fulfilled, this thesis includes a practically efficient realization which has also been implemented on real-world hardware. This realization is rigorously proven to be secure with respect to the proposed definition.

Despite the formal methodology, the prospect of a practically efficient realization is already reflected in the definition of the envisioned building block. Cryptography has shown that, in principle, any computable function whose inputs might be distributed across mutually distrustful parties can be securely evaluated using so-called secure multi-party computation (MPC). However, generic MPC techniques are too inefficient for real-world applications and also come with a number of other drawbacks. Hence, research at the intersection of IT security and cryptography considers tailor-made building blocks which allow a practically efficient realization but are also provably secure with respect to a precise definition. The main challenge, therefore, is to find a definition of security that, on the one hand, is not overly idealized and thus unrealizable, but, on the other hand, still captures a meaningful notion of security and is not too weak, while at the same time admitting a practically efficient realization.

The most important contribution of this thesis is to find exactly that definition. Even disregarding the extended features, the resulting building block is the first one that


The second and much more subtle goal of this thesis is to contribute to the question of how security for complex systems should be defined. Besides game-based security definitions, simulation-based security definitions have turned out to be particularly fruitful. In modern cryptography, new schemes or improved realizations of existing schemes nearly always come together with a precise definition of their security and a corresponding proof. However, the building blocks considered in cryptography are traditionally rather simple objects (e.g. encryption schemes, signature schemes, commitment schemes, etc.). At least, they are much simpler than a building block for anonymous point collection, which we deem a complex system. Conversely, in the field of IT security, which considers larger systems, strict security proofs in the cryptographic sense are frequently missing. This thesis aims to contribute to closing this gap. To this end, the traditional way of defining a list of certain desired properties of the envisioned system is combined with the simulation-based Universal Composability (UC) framework. Evidence is given throughout the thesis that this combined approach leads to improved security guarantees compared to a plain game-based approach. On the one hand, this combined approach thus seems to be the right choice and might serve as a blueprint for comparably complex systems. On the other hand, it also becomes obvious that a UC-based definition, while viable, yields cumbersome and bulky proofs. The thesis broaches the question of whether this effect has a more fundamental cause, which might be the starting point of further research.

# **Zusammenfassung**

Das Sammeln und Einlösen von Punkten stellt in unzähligen nutzerzentrierten, cyber-physikalischen Systemen eine zentrale Komponente dar. Salopp ausgedrückt kann diese Komponente oder dieser Baustein als ein persönliches "Sparschwein" betrachtet werden, welches dem Nutzer ein Ein- und Auszahlen von Punkten ermöglicht. Je nach Anwendungsfall ergeben sich verschiedene Interpretationen der Punkte: als Geldeinheiten (z.B. Eurocent), Loyalitätsbonuspunkte, Zuverlässigkeitsbewertung, etc. Die vorliegende Arbeit beschäftigt sich mit dem Problem des *anonymen* Punktesammelns.

Derzeit in der Praxis eingesetzte Anwendungen verknüpfen den gespeicherten Punktestand typischerweise mit einer ID (z.B. der Seriennummer einer Karte) oder sogar direkt mit einem Nutzerkonto. Damit bieten existierende Systeme dem teilnehmenden Nutzer keinerlei Anonymität, sondern im besten Fall lediglich Pseudonymität und erlauben, einzelne Transaktionen, die zum selben Nutzer gehören, miteinander zu verketten. Dies ermöglicht, ein Bewegungsprofil des Nutzers zu erstellen. In der kryptographischen Literatur sind verschiedene, privatsphäreschützende Lösungen für spezifische Szenarien vorgeschlagen worden: unter anderem (anonymes) E-Cash, anonyme Reputationssysteme, Loyalitätssysteme und Anreizsysteme. Keine der vorgeschlagenen Lösungen betrachtet anonymes Punktesammeln als einen generischen Mehrzweckbaustein. Auch wenn Letzteres nicht per se nachteilig ist, sind die Lösungen häufig sehr beschränkt (bspw. wird nur der spezifische Aspekt der Punkteinzahlung betrachtet) und wichtige Zusatzfunktionen (bspw. das gezielte Ausschließen von Nutzern), die jedoch für die praktische Verwendbarkeit notwendig sind, bleiben unberücksichtigt. Darüber hinaus lässt eine Mehrheit der Vorschläge formale Sicherheitsmodelle, geschweige denn Sicherheitsbeweise, vermissen. Sie basieren vielmehr auf der Hoffnung, dass ein nur vage spezifizierter Sicherheits-/ Privatsphärebegriff irgendwie erfüllt sei.

Diese Arbeit verfolgt zwei Ziele.

In erster Linie ist diese Arbeit eine umfassende, formale Betrachtung des anonymen Punktesammelns als generischer Baustein inkl. eines präzisen Sicherheitsmodells und -beweises. Zu diesem Zweck wird eine Definition für anonymes Punktesammeln herausgearbeitet, die nicht nur einen starken Sicherheits- und Privatsphärebegriff bietet, sondern auch praktisch relevante Leistungsmerkmale abdeckt. Damit erweitert die vorgeschlagene Definition die Anwendbarkeit eines solchen Bausteins in realen Szenarien. Da eine reine Definition, welche nicht erfüllt

werden kann, substanzlos ist, beinhaltet diese Arbeit auch eine praktisch effiziente Realisierung, die auf realer Hardware implementiert wurde. Diese Realisierung wird als sicher bzgl. der vorgeschlagenen Definition bewiesen.

Trotz der formalen Methodik zeigt sich das Ziel einer praktisch effizienten Realisierung bereits in der Definition. Die Kryptographie hat gezeigt, dass es prinzipiell möglich ist, jede beliebige, berechenbare Funktion, deren Eingabe über verschiedene, sich paarweise misstrauende Parteien verteilt sein kann, mit Hilfe sog. sichererer Mehrparteienberechnung (MPC) auszuwerten. Generische MPC-Techniken sind jedoch zu ineffizient für reale Systeme. Die Forschung an der Schnittstelle zwischen IT-Sicherheit und Kryptographie betrachtet daher maßgeschneiderte Bausteine, die sowohl eine praktisch effiziente Umsetzung erlauben, als auch beweisbar sicher bezüglich einer präzisen Definition sind. Die wesentliche Herausforderung ist somit, eine Definition zu finden, die auf der einen Seite nicht überidealisiert und somit unerfüllbar ist, aber auf der anderen Seite dennoch einen sinnvollen Sicherheitsbegriff bietet, der zudem auch eine praktisch effiziente Realisierung ermöglicht.

Der wichtigste Beitrag dieser Arbeit ist, eine solche Definition zu finden. Auch ohne Berücksichtigung der zusätzlichen Leistungsmerkmale ist der resultierende Baustein der erste seiner Art, der gleichzeitig


Das zweite und deutlich subtilere Ziel dieser Arbeit leistet einen Beitrag zu der Frage, wie Sicherheit für komplexe Systeme definiert werden sollte. Neben spielebasierten Sicherheitsdefinitionen haben sich simulationsbasierte Sicherheitsdefinitionen als besonders fruchtbar herausgestellt. In der modernen Kryptographie werden neue Verfahren oder verbesserte Realisierungen vorhandener Verfahren nahezu immer zusammen mit einer präzisen Definition ihrer Sicherheit und einem zugehörigen Beweis vorgeschlagen. Allerdings sind die von der Kryptographie betrachteten Bausteine traditionell eher einfache Objekte (z.B. ein Verschlüsselungsverfahren, ein Signaturverfahren, ein Commitment-Verfahren, etc.), zumindest deutlich einfacher als ein Baustein für anonymes Punktesammeln, welcher im Folgenden als komplexes System betrachtet wird. Im Gegenzug fehlen im Bereich der IT-Sicherheit, welche größere Systeme betrachtet, häufig strenge Sicherheitsbeweise im Sinne der kryptographischen Methodik. Diese Arbeit möchte einen Beitrag dazu leisten, diese Lücke zu schließen. Zu diesem Zweck wird der traditionelle Ansatz, eine Liste wünschenswerter Eigenschaften des anvisierten

Systems zu definieren, mit dem simulationsbasierten UC-Framework kombiniert. Anhand verschiedener Aspekte des Systems wird diskutiert, wie dieser kombinierte Ansatz gegenüber einer rein-spielebasierten Modellierung zu verbesserten Sicherheitsgarantien führt. Einerseits scheint dieser Ansatz somit grundsätzlich der richtige Weg zu sein und könnte als Blaupause für vergleichbar-komplexe Systeme dienen. Andererseits wird jedoch auch offensichtlich, dass eine UC-basierte Definition zwar grundsätzlich ein gangbarer Weg ist, jedoch zu sperrigen, schlecht-handhabbaren Beweisen führt. Die Frage, ob diesem Effekt ein grundsätzlicheres Problem zugrunde liegt, wird andiskutiert und könnte Ausgangspunkt für weitere Forschung sein.

## **1 Introduction**

This thesis proposes a flexible security model and cryptographic protocol framework designed for a new cryptographic building block that enables anonymous point collection. To the best of the author's knowledge, this work is the first with a rigorous security definition and proof which capture all aspects of such a building block in an integrated model. Furthermore, it is unarguably the most comprehensive formal treatment of anonymous point collection overall. The framework is very flexible in the sense that auxiliary features (e.g. blacklisting of users, selective unveil of individual transactions) which might be required in certain applications are included in the definition, but each can just as well be omitted individually, without changing any other part of the system or sacrificing security.

The need for such a building block is obvious from the multitude of possible applications. The abstract notion of points can be interpreted as monetary units (e.g. Euro cents), loyalty rating points, reliability credits, etc.

In the scope of loyalty programs, prominent examples are the German Payback system [PAY16] and the UK-based Nectar program [Aim16].

Complementary currencies are commonly used by providers of physical services to restrict access on a pre-payment basis. Typical examples are travel cards for regional public transportation systems, like the Oyster Card for the London underground railway [Tra19], access cards for natatoriums, or campus payment systems for canteens and the like [ven19; Cou19]. Here, customers first top up their wallet (typically in the form of an ID-1 card), which is then charged per usage. Post-payment variants also exist, in which users first accumulate debt and clear it later. In the scope of cashless payment systems, electronic toll collection (ETC) is of particular interest. ETC is already deployed in many countries all over the world with an estimated annual turnover of 10.6 billion US dollars by 2022 [Mar17]. The EU plans to introduce the first implementation of a fully interoperable tolling system (EETS) by 2027 [EC17].

Reliability (or reputation) assessment is used in various interactive systems to curtail riotous behavior. When entering the system, new users only have a limited choice of options for interacting with the system, but long-term users can gain access to more advanced options (and to options that are more harmful if misused) depending on their reputation level. Basic examples are Internet forums where new users can post new messages and edit their own ones, but must not edit or even delete other posts until they have demonstrated responsible behavior. In Vehicle-to-Grid scenarios [KT05], owners of e-vehicles are not only paid for the buffer capacity which the batteries of their e-vehicles provide to the grid while the cars are parked at the mall, office, etc., but the grid operator also needs to rate the soundness of users' declarations of how long their cars will be connected to the grid in order to predict their availability and to create precise forecasts for the grid management. This requires means to rate the reliability of (anonymous) individuals in order to spot owners of e-vehicles who frequently leave before the stated time.

Lastly, the same mechanism can be used to incentivize particular behavior. For instance, in envisioned mobile sensing scenarios [Chr+11], users should be encouraged to collect environmental or health data measured with their smart devices and to provide this data (enhanced by location and time information) to some operator. In exchange, users receive micropayments which they can use to pay for services based on the collected data.

Unfortunately, the systems in use today do not protect the privacy of their users and are identifying or, at best, only pseudonymous. The latter still allows transactions to be traced and linked to the same user. Surely, in some scenarios (e.g. loyalty programs), identification and traceability of users is part of the business interest in order to enable other business-relevant purposes like, e.g., personalized advertising. But in many scenarios personal information is only a by-product which is (falsely) deemed unavoidable to manage individual wallets or accounts. For example, in the cases of campus cashless payment systems, the ETC scenarios or access cards, the operator of such a system, e.g. the caterer of the canteen or the operator of the natatorium, is interested in what has been purchased and how often, but is usually not interested (or should not be interested) in who has made the purchase. Holding the personal information nonetheless invites abuse or theft of data. Thus, an efficient and cost-effective privacy-preserving mechanism which avoids data collection in the first place, but still enables the billing functionality, should be of interest to the providers as well. In this way, there is no need to deploy costly technical and organizational measures to protect a large amount of sensitive data, and there is no risk of a data breach resulting in costly lawsuits, fines, and loss of customer trust. This is especially interesting in view of the new EU General Data Protection Regulation (GDPR) [EC18], which has been in effect since May 2018. The GDPR stipulates comprehensive protection measures and heavy fines in case of non-compliance.

## **1.1 Related Work**

When considering related work for our anonymous point collection scheme, the body of work to be examined heavily depends on which perspective on the scheme is chosen. The more application-oriented way is to directly look at the anonymous point collection system as some kind of anonymous e-cash system, or at a concrete scenario which involves a specific kind of points, e.g. in the scope of a customer loyalty program or an anonymous reputation program. However, we would like to focus more on the aspects of anonymous point collection as a generic, multi-purpose building block. From this point of view, our anonymous point collection scheme can be used to build the aforementioned systems but is not exclusively limited to these applications.

#### **1.1.1 Application-specific Proposals**

Except for [JR16; Mil+15], which are discussed in Section 1.1.3, the cryptographic literature has not considered anonymous point collection as a generic building block before. In the following, a brief overview of rather application-centric work is given. Among the proposed solutions for scenarios comparable to our goal, the literature on (anonymous) e-cash and payment systems [CHL05; Bal+15; Rup+15; AIR01; Bel+08; ILV11; Gar+08; KHG08; Gar+09], anonymous reputation systems [AK12; AKS12], anonymous loyalty systems [EFS04; BD15], and anonymous incentive systems [Mil+15; Gon+15] is most notable. For the specific task of privacy-preserving electronic toll collection (ETC), many ideas have been proposed as well [JCV15; Jar+14; Jar+16; Day+11; Bar+16; PBB09; Bal+10; Mei+11].

At first sight, the problem of anonymous point collection might appear easily solvable using (offline) e-cash, if each point is assumed to correspond to a single e-coin. In order to collect points, the user and the operator may repeatedly execute the protocol to withdraw one e-coin for each point. All collected coins may later be redeemed using the protocol for spending coins (multiple times). However, besides being inefficient, because coins typically cannot be aggregated, this also violates user privacy, as in traditional offline e-cash, e.g. [CHL05], the withdrawal of e-coins is identifying. This is an unavoidable necessity, because a user's identity needs to be encoded into an e-coin during withdrawal to enable double-spending detection. Even transferable e-cash, e.g. [Bal+15], does not achieve the anonymity goal of this work. In such a scheme, the ownership of an e-coin can be transferred anonymously and unlinkably between users multiple times without the help of the bank. However, an impossibility result by Canard and Gouget [CG08] implies that an adversary impersonating the operator and the point-of-sale would be able to link a user's transactions. Moreover, transferable e-cash allows users to transfer e-coins arbitrarily among each other, a property which is undesirable in some of our envisioned scenarios, as users would be able to pool their points.

A loyalty system is meant to reward users (most often customers) for being steady partners of some other party (usually a merchant). In the easiest case, users collect points for their participation in some action and later redeem those points. Enzmann, Fischlin, and Schneider II [EFS04] introduce two privacy-friendly loyalty systems for electronic market places. Their counter-based solution builds on RSA blind signatures. Blanco-Justicia and Domingo-Ferrer [BD15] propose a loyalty system which uses partially blind signatures in pairing-based groups to ensure anonymity. Milutinovic et al. [Mil+15] present an unlinkable multi-purpose incentive scheme. The scheme draws from zero-knowledge proofs, commitment schemes, and partially blind signatures. More recently, Gong et al. [Gon+15] proposed a privacy-preserving, incentive-based demand-response scheme building on identity-based signatures, partially blind signatures as well as proofs of knowledge.

A reputation system provides the means to rate the behavior of parties in a system in order to support other parties in deciding whom to trust. In the simplest case, reputation equals the sum of (positive and negative) individual rating values. Systems providing rater and ratee privacy in peer-to-peer systems can be built from e-cash and anonymous credentials as, e.g., shown by [And+08]. Building on blacklistable anonymous credentials [Tsa+07], several papers (e.g., [AK12; AKS12]) have been published, dealing with TTP-free reputation-based blacklisting, where a central authority (like Wikipedia) may score the actions of its anonymous users.

Unfortunately, a lot of the aforementioned work lacks a formal security model, a precise statement of what security goal is achieved, and/or rigorous proofs of security. Moreover, some of the papers were written with a particular use case in mind and implicitly assume that parties exhibit a particular behavior, i.e. they do not consider maliciously acting adversaries in the cryptographic sense but "rational" adversaries. Yet another set of applications which benefit from stronger security, offline capabilities, and negative points are pre- or post-payment systems. In practice, such payment systems are typically implemented using simple RFID-transponder or smartcard-based solutions like the MiFARE Classic [NXP14], which essentially offers no security and privacy at all [Gar+08; KHG08; Gar+09, and more], or the MiFARE DESFire [NXP16; OP11], which still allows all transactions to be linked. Therefore, we prefer to look at anonymous point collection as an application-independent building block with a proper formalization of its functionality, its security and its privacy properties. For details on how to use anonymous point collection to build some of the applications, see Section 2.3. For a discussion of the long list of proposals for the electronic toll collection (ETC) scenario [JCV15; Jar+14; Jar+16; Day+11; Bar+16; PBB09; Bal+10; Mei+11; DDS12; Che+13], the reader is referred to Nagel et al. [Nag+20], which has been co-authored by the author of this thesis. There, the authors demonstrate how the proposed anonymous point collection scheme can be used to construct an ETC system. In summary, it seems fair to say that previous solutions for that domain have mostly been proposed by practitioners and, apart from a few exceptions [DDS12; Che+13; Bal+10], lack any formal security analysis.

#### **1.1.2 Proposals with Similar Constructions**

Our proposed anonymous point collection scheme bears some resemblance to the notion of priced oblivious transfer (POT). POT was introduced by Aiello, Ishai, and Reingold [AIR01] as a tool to allow a customer to purchase digital goods from a merchant without leaking the "what, when and how much". However, privacy of the customer (or user in our scenario) is not granted, and the original POT scheme is inherently limited to a single point-of-sale. A POT is a two-party protocol between the customer and the merchant. The merchant owns a set of messages and tags each of the messages with a price. The customer is allowed to receive a subset of the messages such that their total price does not exceed a specific limit. A POT scheme guarantees that the customer does not learn anything about the messages which have not been picked and that the merchant does not learn anything at all. The envisioned scenario is the purchase of a set of cryptographic keys which in a later step allow the customer to access DRM-protected digital goods (i.e. software licenses, video-on-demand, etc.). Camenisch, Dubovitskaya, and Neven [CDN10] extend POTs by anonymity of the customer and unlinkability of individual transactions, which brings them closer to our scheme. The protocol is based on two different signature schemes [ASM06; BB04], the set membership protocol from [CCs08] and zero-knowledge proofs. Nonetheless, the scheme is still limited to a single merchant (but with multiple users). Moreover, [CDN10] lacks a fully rigorous formal treatment. Rial and Preneel [RP10] extend POTs by optimistic fairness such that both parties can appeal to a third party in case of a dispute.
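To make the functional guarantee of a basic POT concrete, the following is a minimal, idealized sketch with all cryptography stripped away. All names and signatures are hypothetical and serve purely to illustrate the text above; they are not part of any of the cited constructions.

```python
# Idealized priced oblivious transfer: the customer gets exactly the selected
# messages if their total price fits the prepaid limit. In the ideal world the
# merchant learns nothing about the selection and the customer learns nothing
# about the messages that were not picked.

def ideal_pot(merchant_messages: dict[int, tuple[bytes, int]],
              customer_selection: list[int],
              spending_limit: int) -> list[bytes]:
    """merchant_messages maps an index to a pair (message, price)."""
    total_price = sum(merchant_messages[i][1] for i in customer_selection)
    if total_price > spending_limit:
        raise ValueError("selection exceeds the prepaid limit")
    return [merchant_messages[i][0] for i in customer_selection]

# Example: two priced messages, the customer picks the cheaper one.
goods = {0: (b"license key A", 3), 1: (b"license key B", 7)}
print(ideal_pot(goods, [0], spending_limit=5))
```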

In some other aspects, our anonymous point collection scheme exhibits similarities to P-signatures [Bel+08; ILV11], which have been introduced by Belenkiy et al. [Bel+08] as a tool to construct anonymous credentials. The scheme involves a set of users and an issuer. It combines the algorithms of a commitment scheme and a signature scheme and extends them by two algorithms that allow the user to prove that (1) two commitments contain the same message, and (2) the user knows a valid signature under the issuer's private key on the message inside a commitment, respectively. The scheme in [Bel+08] builds on weak Boneh-Boyen signatures [BB04], Groth-Sahai commitments and Groth-Sahai NIZK proofs [GS08]. Although their construction shares many ideas with ours, there are at least two major differences: (1) Our building block allows commitments to be modified homomorphically in order to obtain new signatures, which is essential for "depositing points", while [Bel+08] only allows commitments to be re-randomized. (2) P-signatures do not include a mechanism to prevent users from showing different commitments to the same message twice. Skipping ahead, such a mechanism is required for double-spending detection (see later) but is not required for standard anonymous credentials.

#### **1.1.3 Generic Proposals—uCentive and BBA**

Besides [JR16], only [Mil+15] appears to consider a point collection mechanism as a multi-purpose building block on its own. However, the proposed protocol, called uCentive, targets a simpler scenario than we do: incentives are not accumulatable on the user's side but stored and redeemed individually, negative points are not supported, and double-spending detection is done online rather than offline. uCentive also differs regarding the use of cryptographic building blocks: it makes use of anonymous credentials and partially blind signatures. Unfortunately, the security and privacy properties of the protocol are again only informally stated and no proofs are given.

Jager and Rupp [JR16] introduced BBA (Black-Box Accumulation) as a generic building block for a curtailed variant of anonymous point collection. They formalized the core functional, security, and privacy requirements for a variety of user-centric protocols such as loyalty, refund, and incentive systems. As their work is the starting point for [Nag+17; Nag+20], which in turn form the basis of this thesis, [JR16] is discussed in more detail. Differences between [JR16] and this work are also detailed in Section 1.2.

BBA consists of a set of non-interactive algorithms to generate, manipulate, and show statements about BBA tokens (aka piggy banks or wallets). The scheme stipulates four types of parties: a set of users, an issuer, a set of accumulators and a verifier. However, as the issuer, all accumulators and the verifier share the same public-private key pair and must completely trust each other, they are better regarded as a single party. BBA allows a user to collect *positive* points (representing incentives) in an anonymous and unlinkable fashion. In the beginning, a user receives a BBA token generated by the issuer which is bound to a unique serial number known to both parties. This serial number remains constant during the lifetime of a token.¹ All points are collected using this single, constant-size accumulation token. To this end, a user blinds and unblinds the token before and after every transaction with an accumulator. When redeeming the token, the sum of all collected points as well as the *initial* serial number is revealed to the verifier. Obviously, obtaining and redeeming a BBA token are linkable to each other, as the serial number of the token is revealed in both operations. After a token has been redeemed, it must not be used again. A permanent connection to a database containing the serial numbers of already redeemed tokens is required in order to prevent double-redemption (aka double-spending) of tokens. Hence, BBA schemes are online systems. Also, users can re-use an old state of their token when points are accumulated without being detected. For this reason, "negative" points are not supported, as users could easily get rid of them. Jager and Rupp [JR16] assume that positive points (i.e. incentives) are beneficial for users and thus a "rational" adversary has no interest in re-using an old state of a token.

¹ We stress that a serial number in [JR16] must not be confused with a serial number in this thesis. The serial numbers of [JR16] are better compared to wallet ID in this work. Both remain constant during the lifetime of a token or wallet, resp., and thus are identifying for the token/wallet. Serial numbers in the sense of this work denote single transactions, i.e. the deposition or disbursement of points, and do not exist in [JR16].

Moreover, the authors formalize a rather weak form of security, by only demanding that a collusion of malicious users cannot redeem more points than the total amount of points issued to them. In particular, this does not rule out that users transfer points arbitrarily between their BBA tokens (without help). Normally, non-interactive schemes are a highly desired goal in cryptography, because they have little communication complexity and are therefore typically very efficient. But the (non-interactive) algorithms of [JR16] are rather artificial. For example, the activity of adding points to a BBA token is not a single protocol, but willfully split into three algorithms which are (forcibly) non-interactive. First, users locally mask their tokens (to enable anonymity) using the first algorithm; then the blinded token is handed over to the accumulator outside the scope of the model; the accumulator locally manipulates the token using the second algorithm; the new token is given back to the user outside the scope of the model; and finally the user locally unmasks the token again using the third algorithm. This approach yields non-interactive algorithms at the cost of a direct semantic interpretation of these algorithms, and it is not immediately clear what kind of security is achieved.
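The structural artifact just described can be illustrated by the following toy sketch, which mimics the three-step flow (mask locally, accumulate, unmask locally). It ignores signatures and all actual cryptography; names and the additive blinding are illustrative assumptions and do not correspond to the real BBA algorithms of [JR16].

```python
# Toy three-step structure: user blinds the balance, the accumulator updates
# it without seeing the underlying value, the user removes the blinding again.

import secrets

MODULUS = 2**127 - 1  # toy modulus, additive blinding only

def user_mask(balance: int) -> tuple[int, int]:
    """User locally blinds the current balance with fresh randomness."""
    blinding = secrets.randbelow(MODULUS)
    return (balance + blinding) % MODULUS, blinding

def accumulator_add(blinded_balance: int, value: int) -> int:
    """Accumulator adds `value` without learning the underlying balance."""
    return (blinded_balance + value) % MODULUS

def user_unmask(blinded_balance: int, blinding: int) -> int:
    """User removes the blinding and stores the updated balance."""
    return (blinded_balance - blinding) % MODULUS

# The two hand-overs of the blinded token between user and accumulator happen
# outside the scope of the model, which is exactly the artifact criticized above.
blinded, blinding = user_mask(10)
blinded = accumulator_add(blinded, 5)
assert user_unmask(blinded, blinding) == 15
```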

To summarize, the original BBA framework suffers from a number of serious restrictions including:


These shortcomings limit the applicability of BBA as a building block. For instance, operators of loyalty or reputation systems do not want their users to pool or trade their points. Also, in certain scenarios negative points might be required. To realize this feature with a BBA scheme, one would need to redeem all points on a token, create a new one, and charge it with the remaining (unspent) points. However, in this way all partial redemptions of a user are linkable. Finally, BBA does not provide any of the auxiliary features (like blacklisting, etc.) which are required for practical use.

In [Nag+17], Nagel et al. propose BBA+ (Black-Box Accumulation Plus), which rectifies many of the drawbacks of the original BBA scheme [JR16]. But [Nag+17] still lacks an integrated security model which captures both security for the operator and privacy for the users in a unified model. This and other issues are addressed by Nagel et al. in [Nag+20]. Both are discussed in the following section.

## **1.2 Contribution**

This thesis is mostly based on [Nag+17; Nag+20], but also goes beyond those. Before the combined contribution of [Nag+17; Nag+20] and this work over previous proposals is described, we briefly sketch their evolution.

BBA+ [Nag+17] improved over BBA [JR16] by adding offline capabilities, support for negative points, prevention of the pooling of points between users, and a double-spending mechanism. Also, the definitional part has mostly been rewritten. BBA+ considers interactive protocols, which leads to more intuitive definitions and broadens the class of possible instantiations. For example, the definition of a BBA+ scheme imposes fewer restrictions on an instantiation, as the definition only stipulates a single protocol for the deposition of points instead of three non-interactive algorithms. This also allows the security properties of BBA+ to be formalized in a more natural as well as stronger way. However, in [Nag+17], security (incl. correctness) for the operator on the one hand and privacy for the users on the other hand are still considered to be distinct aspects. Security is still defined by a list of properties that are individually proven in a game-based approach, which has been inspired by the definition of a pre-payment with refunds scheme proposed in [Rup+15]. Privacy for users is defined by a simulation-based approach in [Nag+17].

In [Nag+20], Nagel et al. use BBA+ to construct a privacy-preserving ETC system. This enhances BBA+ by additional features that are essential for such a scenario. With respect to practical deployments, the lack of these properties is a significant shortcoming of BBA+, and the enhancements are also beneficial for other applications of anonymous point collection. Furthermore, Nagel et al. [Nag+20] use the UC framework to define all security aspects, including privacy, as an ideal functionality.

In this thesis, the functional extensions of [Nag+20] are backported and presented as part of a generic building block for anonymous point collection. Moreover, many subtle details in [Nag+20] are not formally spelled out, but are only sketched. This mostly concerns the channel model (i.e. in which cases communication is anonymous or authenticated and, if authenticated, at which point during the execution the identities are learned) and the synchronization of the distributed state between the different parties. In this thesis, these seemingly minor details are straightened out as well. This has not only been a pure formality for the sake of completeness, but unveiled several oversights in [Nag+20]. Fixing these flaws necessitated modifications on the protocol level but also adjustments to the security model.

#### **1.2.1 System Definition, Security Model and Proof**

This thesis presents a framework for anonymous point collection which addresses the restrictions of BBA discussed in Section 1.1.3, thereby significantly strengthening its security and broadening its applicability. For the scenario which is detailed in Chapter 2, we propose an ideal functionality Fapc for anonymous point collection based on the UC framework [Can01]. Typically, the standard approach would be to cast a complex system like Fapc as an MPC problem and then resort to *generic but inefficient* UC-secure MPC [IPS08; Can+02]. Our work is one of very few that combine a complex, yet practical crypto system with a thorough UC security analysis.

Our framework improves over previous work in the following aspects:

- (a) The security of BBA [JR16] (and even BBA+ [Nag+17]) has been modeled by formalizing each security property individually, as is usually done in a game-based setting. This approach bears the intrinsic risk that important and expedient security aspects are overlooked, i.e. that the list is incomplete. This danger is eliminated by the UC approach, where we do not aim to formalize a list of individual properties but rather what an ideal system should look like.
- (b) The definition of Fapc allows interactive protocols and thus imposes fewer restrictions on possible realizations.
- (c) A single ideal functionality Fapc which encompasses a complete sequence of transactions allows for a very "strong" variant of security and privacy with an intuitive, semantic interpretation.

case a user commits double-spending (for those realizations that allow double-spending with subsequent identification). The set of unlinkable transactions not only includes the deposition but also the disbursement of points.

- (a) The definition demands the existence of several auxiliary features which deal with potential "real-world" issues like broken hardware, legal disputes and so on: (i) A blacklisting mechanism that allows individual wallets, or all wallets of a user, to be excluded from the system. (ii) A recalculation mechanism that allows the "true" balance of a wallet to be recalculated or restored in case of a dissent, double-spending or broken hardware. (iii) A prove-participation mechanism that allows users to selectively unveil a single transaction and thereby prove their participation without compromising the unlinkability of other transactions (including their own). Note that some of these features either presume the consent of the user (otherwise the operator could break unlinkability on its own) or a third party which serves as a dispute resolver and enables a key-escrow mechanism.
- (b) To enforce the collection of negative points, which users might not collect voluntarily, the system optionally includes a party called the violation enforcer to re-establish fairness.
- (c) Lastly, as a minor detail, Fapc allows user and PoS attributes on which the price of a transaction may depend. Also, an attribute can be used to bind wallets to a billing period encoded in that attribute. Although this extension seems trivial, it has a great impact in practice. It does not only allow more complex pricing models but additionally increases real-world performance. By making PoSes accept only wallets from the current period, the size of the blacklist checked by the PoS can be limited to enable fast transactions. Similarly, the database needed to recalculate balances can be kept small.

A major challenge in designing Fapc was to combine provable security and practicality, where the latter includes practical performance figures. Hence, the difficulty was to find a definition that yields a reasonable trade-off between various aspects: On the one hand, it needs to be sufficiently abstract to represent the semantics of anonymous point collection, which allows the formalization of strong security features. If Fapc were aligned more closely to a concrete realization, this would artificially restrict the set of admissible realizations, introduce unnatural artifacts and thereby weaken the provided security guarantees. On the other hand, while in principle every computationally solvable problem can be cast as an MPC problem, a definition that is completely agnostic of a potential realization would certainly lead to very inefficient solutions.

As stated above, the proposed definition captures the problem of anonymous point collection within a single ideal functionality Fapc with polynomially many parties. This makes the security analysis and proof highly non-trivial and cumbersome, because a high number of combinations of which parties are corrupted needs to be considered in the proof. At first sight, it seems tempting to follow a different approach: In many tasks² of the system, only specific parties interact with each other while the majority of parties is not involved. For example, a single user and a single PoS interact with each other in order to deposit points to the user's wallet. In a similar spirit, most tasks only involve two parties. This observation makes it tempting to decompose the system into a set of two-party tasks, define an ideal functionality for each of these tasks, realize each of them by a protocol, analyze their security separately, and deduce the security of the overall system using the UC composition theorem. However, this entails a slew of technical subtleties due to the shared state between the individual two-party protocols which cannot easily be solved.

Moreover, although our system uses cryptographic building blocks for which UC formalizations exist (commitments, signatures, NIZK), these abstractions cannot be used. For example, UC commitments are non-transferable, i.e., the commitment message cannot be passed to a different party, but our construction relies heavily on passing commitments between parties. Abstract UC signatures are just random strings that are information-theoretically independent of the message they sign. Thus, it is impossible to prove any statement about message-signature pairs in zero-knowledge. Hence, our security proof has to start almost from scratch. Although parts of it are inspired by proofs from the literature, it is very complex and technically demanding.

#### **1.2.2 Protocols and Implementation**

Besides the definitional framework, this thesis includes a realization P5C (Provably-Secure yet Practical Privacy-Preserving Point Collection) of Fapc using advanced cryptographic building blocks and presents promising results of a prototypical implementation on real-world hardware. They show that the proposed realization may already be usable in practice, allowing transactions to be run within a second.

² The term "task" has no formal definition. However, we assume that the reader has some intuitive understanding of it. For an informal definition see Definition 2.1.

At a high level, the construction is fairly intuitive and draws from techniques also commonly used in other privacy-preserving protocols, including e-cash, P-signatures, and anonymous credentials. However, there are technical differences to these concepts, as explained in Section 1.1.1. Moreover, a major challenge was to twist and combine all these techniques to achieve simulation-based security, practicality and efficiency *at the same time*. The concrete selection of the right instantiation of building blocks and the fine-tuning of how they interplay has to be credited to the co-authors of [Nag+17; Nag+20] and not the author of this thesis.

The proposed realization is a semi-generic construction using public-key encryption, homomorphic trapdoor commitments, digital signatures, and Groth-Sahai non-interactive zero-knowledge proofs over bilinear groups for which the SXDH assumption holds. To achieve freshness of tokens, we draw from techniques typically used in offline e-cash systems, namely double-spending tags.
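For readers unfamiliar with double-spending tags, the following textbook-style equations illustrate the general idea from the offline e-cash literature; the exact tags used in the construction of Chapter 7 may differ in their details. Here, usk denotes the user's secret key, d a fresh one-time value fixed inside the wallet state, and c a random challenge chosen by the other party.

```latex
% A single tag reveals nothing about usk, since d is uniformly random:
\[ t \,=\, \mathrm{usk}\cdot c + d \pmod{p} \]
% Re-using the same wallet state yields two tags with the same d but different
% challenges c_1 \neq c_2, from which the operator recovers the user secret:
\[ \mathrm{usk} \,=\, (t_1 - t_2)\,(c_1 - c_2)^{-1} \pmod{p} \]
```

This is the classical mechanism that lets an offline system tolerate double-spending at transaction time and still identify the cheater afterwards.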

To realize the blacklisting mechanism for users, we adopt and adjust an idea from the e-cash literature [CHL05]. At a high level, this technique is a key-escrow mechanism on a per-wallet basis which allows all transactions of the affected wallet to be linked with the help of a trusted dispute resolver. Skipping ahead, on a technical level this requires encrypting the seed of a PRF under the key of the dispute resolver. This part is tricky due to the use of Groth-Sahai NIZKs for efficiency reasons and the lack of a compatible (i.e., algebraic) encryption scheme whose message space is in turn compatible with the space of the seed. The contribution of the author of this thesis to this problem is the adaptation of a CCA-secure, structure-preserving encryption scheme [Cam+11] to the SXDH hardness assumption.
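The following toy sketch illustrates the per-wallet escrow idea in plain Python. The names, the HMAC-based stand-in PRF and the purely symbolic "encryption" are illustrative assumptions only; the actual construction uses an algebraic PRF and a structure-preserving encryption scheme so that everything remains compatible with Groth-Sahai proofs.

```python
import hashlib
import hmac
import secrets

def prf(seed: bytes, counter: int) -> bytes:
    """Stand-in PRF deriving the tag of the counter-th transaction of a wallet."""
    return hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()

# At wallet issuance: the user picks a seed and escrows it (here only
# symbolically) under the dispute resolver's key; the ciphertext becomes
# part of the wallet.
seed = secrets.token_bytes(32)
escrowed_seed = seed            # placeholder for Enc(pk_resolver, seed)

# Every transaction publishes prf(seed, i); without the seed these values are
# unlinkable pseudorandom strings.
published_tags = [prf(seed, i) for i in range(3)]

# Blacklisting: the dispute resolver decrypts the escrow, and the operator can
# now recompute the tags of this single wallet and link (only) its transactions.
recomputed = [prf(escrowed_seed, i) for i in range(3)]
assert recomputed == published_tags
```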

Other technical challenges arise from building on the Groth-Sahai (GS) proof system. GS proofs are efficient and secure in the CRS model but require particular care, as they are not proper proofs of knowledge for witness components over ℤ_p and are not always zero-knowledge. For example, to prove statements about shrinking multi-commitments over ℤ_p, which we use to obtain compact tokens and proofs, the employed commitment scheme needs to satisfy a non-standard binding property.

In order to assess the suitability of the proposed realization for real-world applications, several variants have been implemented. The user side of the protocols in [Nag+17] has been implemented on a commercial off-the-shelf (COTS) smartphone, while the PoS side has been implemented on the embedded PC of a turnstile. The protocol proposed in [Nag+20] for the ETC system has specifically been implemented on an embedded processor which is known to be used in currently available on-board units such as the Savari MobiWAVE [Sav17]. The biggest advantage for real-world deployment originates from the use of non-interactive zero-knowledge proofs, where major parts of the proofs can be precomputed and verification equations can be batched efficiently. This effectively reduces the computations which have to be performed by the user and the PoS during an actual protocol run. The implementation results show that the most time-critical task, the deposition of points at a PoS on a user's wallet, can be executed within 510 ms.
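As an illustration of the batching mentioned above, the following shows the standard small-exponent batch-verification trick for pairing-product equations. It is a generic textbook technique; the concrete verification equations of the scheme are given in Chapter 7 and may be batched slightly differently.

```latex
% Instead of verifying n pairing-product equations individually,
\[ \prod_j e(A_{i,j}, B_{i,j}) \,=\, T_i \qquad (i = 1,\dots,n), \]
% the verifier picks random \ell-bit exponents r_1,\dots,r_n and checks the
% single combined equation
\[ \prod_{i=1}^{n}\Bigl(\prod_j e(A_{i,j}, B_{i,j})\Bigr)^{r_i} \,=\, \prod_{i=1}^{n} T_i^{\,r_i}. \]
% Pulling each r_i into one pairing argument turns most of the work into
% exponentiations in the source groups; if any single equation is false, the
% combined check passes with probability at most 2^{-\ell}.
```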

#### **1.2.3 Concomitant Contributions**

Our proposed definition of the security and privacy of our building block follows the simulation-based paradigm and in particular builds on top of the so-called UC framework [Can01]. However, it is occasionally claimed that the UC framework is incompatible with privacy and does not allow privacy-preserving building blocks to be defined. We do away with this tale. To this end, we first clarify a typical misconception about how privacy should be looked at in UC. Second, we introduce a new messaging functionality to get rid of the communication model which uses identity-based messaging, as we call it, and which is hard-coded into the original UC framework, without breaking compatibility with the rest of the model.

In this thesis, the security and privacy of our building block are not considered distinct features but are captured by a single, uniform definition. With respect to security, the simulation-based paradigm is usually contrasted with the game-based approach. The game-based approach is sometimes considered to be more natural, as each security game is typically associated with a single desired objective of the final building block.³ With respect to privacy, several notions like k-anonymity [Swe02] or ε-differential privacy [Dwo06; Dwo09; Dwo10] have been proposed. This thesis suggests an approach which combines all these paradigms. Instead of defining a single game for each desired security objective and then applying the game to a concrete (cryptographic) realization, the ideal functionality is shown to meet the list of objectives. Likewise, the privacy assessment should not be conducted on a concrete realization, but on the ideal functionality. The ideal functionality abstracts away the cryptographic complexity and "pulls it out of the equation". As such, the approach followed in this thesis can serve as a blueprint for the analysis of similar systems.

## **1.3 Organization of the Thesis**

In Chapter 2, the envisioned scenario is detailed. The involved parties and their major interactions with each other are introduced. As the features of a building block for anonymous point collection are dictated by the applications within which the building block is eventually deployed, some of these possible applications are discussed in more detail. The chapter concludes with a list of desired properties one might expect from a building block for anonymous point collection.

³ One of the (anonymous) reviewers of [Nag+20] declared to feel more confident about the security of the scheme if there were a list of individual security games instead of a single ideal abstraction, because he/she admitted not being able to tell what security the simulation-based definition provides.

As our security model is UC-based, Chapter 3 provides an introduction to Universal Composability for those readers who are not familiar with this framework. This chapter does not contain any contribution of its own, but is included to keep this thesis self-contained. Also, some very common, so-called setup assumptions are reproduced from the literature. Only the messaging functionality on which the proposed protocol relies is not a pure reproduction, but a merge of existing functionalities.

Given the scenario from Chapter 2, the definition of the proposed building block for anonymous point collection is presented in Chapter 4. The building block is defined as an ideal functionality within the UC framework. The functionality is highly non-trivial and nearly a protocol on its own, as many "real-world artifacts" have to be considered.

Due to its complexity, the definition is reviewed in Chapter 5. This chapter argues why the stipulated definition captures the "right" notion of security and privacy for the involved parties. Also, we show that the properties identified in Chapter 2 are indeed met by the definition. We stress that this is *not* the security proof for the proposed protocol, as (1) a definition cannot be proven and (2) the protocol has yet to be defined. Rather, Chapter 5 bridges the more traditional game-based approach with the simulation-based paradigm.

Chapter 6 introduces the hardness assumptions; all building blocks (encryption, digital signatures, commitments, and the like) and their usual security definitions (IND-CCA, EUF-CMA, and the like) are defined. Similar to Chapter 3, this chapter contains no contribution of its own. Readers who are familiar with these building blocks can safely skip this chapter.

In Chapter 7 a realization of the ideal functionality defined in Chapter 4 is proposed. This realization is given in pseudo-code and uses the building blocks from Chapter 6 as black boxes.

Chapter 8 gives the security proof which shows that the proposed protocol from Chapter 7 is actually a realization of the definition from Chapter 4. The proof follows the typical approach of defining a sequence of hybrids and is rather bulky due to the complexity of the definition.

Chapter 9 reports figures for a real implementation of the pseudo-code on real-world hardware.

Finally, this thesis concludes with Chapter 10. The thesis is summarized, some of the encountered difficulties are revisited, and it is discussed how they might stimulate further research. Also, we sketch some straightforward improvements of this work which are trivial on their own but entail a slew of changes in all parts of this work.

## **2 Considered Scenario**

In a nutshell, anonymous point collection refers to a variety of scenarios in which a set of parties, the users, collect or redeem points inside a personal wallet. The system is managed by an operator, which typically is some kind of legal entity (e.g. a company) whose business interest usually is to have as many users as possible using its system.¹ Users interact with the system at points-of-sale (PoSes) which are set up and maintained by the operator.

Our proposed scheme P5C (Provably-Secure yet Practical Privacy-Preserving Point Collection) is highly flexible and can easily be adapted to different applications. The main design parameter is whether the addition and subtraction of points are treated uniformly by the same interactive task or whether both interactions are treated separately. Another, but strongly related, design parameter is whether wrap-arounds (i.e. a change of sign of the balance of a wallet) and/or under-/overflows need to be specially considered or can be ignored. Both design parameters heavily influence which tasks are supported by the system, which parties interact in these tasks and which "level" of security, or more precisely anonymity, is provided. Further, but decoupled, design decisions relate to the support of optional features like blacklisting.
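The wrap-around issue can be made concrete with a small example. Assuming, as is typical when balances are hidden inside algebraic commitments over prime-order groups, that balances only exist modulo a large modulus, a "negative" balance is indistinguishable from a huge positive one unless the protocol treats sign changes explicitly. The modulus below is a stand-in chosen purely for illustration.

```python
# Toy illustration of the wrap-around problem for balances living in Z_P.

P = 2**255 - 19          # stand-in modulus

def add_points(balance: int, delta: int) -> int:
    """Balances only exist modulo P; subtracting below zero wraps around."""
    return (balance + delta) % P

b = add_points(0, -5)    # a "negative" balance...
print(b)                 # ...shows up as the huge value P - 5
```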

For ease of presentation, we concentrate for now on a specific embodiment that keeps the addition of points, called *deposition*, separated from the subtraction of points, called *disbursement*. Also, deposition is anonymous and is carried out between a user and a PoS, while disbursement is identifying and takes place between a user and the operator. We discuss the alternatives in Section 2.3. Also, we briefly sketch what needs to be changed in the protocols to realize these alternative embodiments whenever convenient during the thesis.

## **2.1 Involved Parties**

Our scheme P5C involves the following parties:

• The *Operator*, which usually is a legal entity and runs the system. It owns and maintains the PoSes. Also, it manages a database of users who have registered for participation in the system and who own a legitimately issued wallet. We stress that the operator *does not* manage the wallets.

¹ Please note that users normally do not pay the operator directly in order to participate in the system; rather, the operator is typically reimbursed by some third party for its service.


² Our implementation (cp. Chapter 9) clearly demonstrates that even low-end smartphones have sufficient computational power to run our protocol.

transaction of points has successfully terminated before users gain whatever benefit is traded in exchange, then a violation enforcer is probably not needed. However, if the kind of application is such that users may trick the PoS (or operator) into granting them access to the application-specific benefit before points have been exchanged, then a violation enforcer might be required in order to re-establish equity. See Section 2.3 and in particular Section 2.3.3 for an example.

## **2.2 Main Tasks**

In the following, we sketch the main tasks of the system to foster a better understanding of the life cycle of the system.

The term "task" has no precise definition. Also, it seems very difficult to give a formal definition which captures the accurate meaning in all cases. On a colloquial level, a task could be called a protocol, but this is formally wrong as in the context of the UC-framework (cp. Chapter 3), the term "protocol" denotes to the whole system, i.e. a (UC-)protocol is synonymous to a scheme from the more traditional game-based point of view. In our sense, "task" means "phase" which is another term without a generally applicable, precise definition but commonly used in the literature. For example, a commitment protocol (aka scheme) consists of a commitment phase and an unveil phase, or an encryption scheme³ consists of an encryption phase and a decryption phase. Here, we intentionally coined the term "task" and avoid the word "phase" as the latter suggests a predefined order/number of executions which is something we do not want to stipulate. An informal definition is

**Definition 2.1 (Task (informal))** *Within a cryptographic protocol or scheme, a task is an interaction between a fixed subset of parties, which is bounded, i.e. has a defined starting and termination point, and which can be given some semantic interpretation. The subset of parties can also encompass only one single party.*

For example, the issuance of a wallet is a task. Figure 2.1 provides an overview of the most important tasks. A detailed description that also includes all tasks can be found in Chapter 4.

Remember that we tentatively concentrate on a specific embodiment that keeps the deposition and disbursement of points separated in two tasks that also exhibit different features.

³ In the context of encryption the term "protocol" instead of scheme is rarely used, as encryption is assumed to be non-interactive.

Figure 2.1: The P5C System Model

**Party Registration** In order to participate in the system, all parties (users, PoSes, operator, etc.) must first create a public key and publish it. The public key is used to identify a party in the system and is assumed to be bound to the party's (physical) identity such as a passport number, social security number, company register number, etc. This is done once and makes the party accountable in case they cheat. For the majority of parties, namely users and PoSes, the operator can act as the registration authority. Details are discussed later on.

**Wallet Issuing** The operator issues wallets to users. A wallet is bound to the user's key and a set of user attributes (discussed later on). The wallet is used to deposit and/or disburse points, stores the accumulated balance and thus constitutes the essential object to participate in the system.

**Point-of-Sale Certification** In order to be able to manipulate wallets in the scope of Deposit or Disburse each PoS needs a certificate that is signed by the operator. This certificate also contains a set of PoS attributes (discussed later on).

**Deposition** This task is executed between a user and an (offline) PoS to deposit points on the user's wallet. The user is always anonymous and the previous balance of the wallet remains secret. The value to be added may depend on publicly verifiable factors from outside the protocol (e.g. the good that is traded, the current time of day, …) as well as a combination of the user's, the current and the previous PoS' attributes. Please note that the operator can also play the role of a PoS, as the operator can use a self-signed PoS certificate. To put it another way, the operator delegates some of its capabilities to manipulate a wallet to PoSes by issuing certificates.

**Disbursement** This task complements Deposit to enable users to disburse points. Disburse is not a mere "inverse" of Deposit, but has some distinct properties that set it apart from Deposit. The concrete changes depend on the application. For most parts of this thesis, Disburse is executed between a user and the operator (instead of a PoS), the previous balance is unveiled (instead of being kept secret), and users are identified. Also, users do not receive an updated wallet (with a lower balance), but wallets are invalidated. However, users can obtain a new wallet by rerunning IssueWallet afterwards. A discussion why Disburse differs from Deposit is given in Section 2.3. There, also other variants of Disburse are presented.

**Double-Spending Detection** As the system is an offline scheme, malicious users might re-use an old state of their wallet instead of the most recent one. In other words, malicious users are not directly prevented from rewinding to a previous, more expedient snapshot of their wallet and thus committing double-spending. To address this problem, IssueWallet, Deposit and Disburse generate double-spending tags that are eventually collected by the operator. The operator periodically runs DetectDS on its database to find pairs of matching double-spending tags and to identify fraudulent users.
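To make the matching step more tangible, the following Python sketch shows one common way such double-spending tags can work. The tag equation is an assumption made purely for illustration and does not necessarily match the construction used by P5C (see Chapter 7 for the actual protocol).

```python
# Hypothetical double-spending tags: every transaction with a wallet state that carries
# serial number `s` emits a tag t = id + c*r (mod p), where `id` encodes the user,
# `r` is a per-state blinding value, and `c` is a fresh random challenge chosen by the
# verifier. One tag hides `id`; two tags for the same serial number reveal it.

p = 2**255 - 19  # stand-in prime; in P5C this would be the order of the elliptic curve

def detect_double_spending(tags):
    """tags: iterable of (serial, challenge, t) triples collected by the operator.
    Returns the set of user identities that double-spent."""
    seen = {}        # serial -> (challenge, t) of the first occurrence
    cheaters = set()
    for s, c, t in tags:
        if s in seen:
            c0, t0 = seen[s]
            if c != c0:                                   # same wallet state, two challenges
                r = ((t - t0) * pow(c - c0, -1, p)) % p   # solve the two linear equations
                cheaters.add((t0 - c0 * r) % p)           # id = t - c*r
        else:
            seen[s] = (c, t)
    return cheaters
```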

**Wallet Blacklisting** With the help of the trusted dispute resolver, the operator is able to blacklist users. We assume that the dispute resolver convinces itself that either the user is fraudulent (e.g. by validating a proof of double-spending) or that the user has volunteered to be blacklisted (e.g. due to a lost wallet) out-of-band, before the dispute resolver consents to blacklist a user.

**Proof of Participation** In special scenarios that include a violation enforcer it might be necessary that users are able to prove their participation in a specific execution of Deposit without unveiling their complete internal state. See Section 2.3.3 for such a scenario.

## **2.3 Applications**

Before we proceed to further describe the scenario and eventually formally define the system, we take an excursion and bring forward some applications of anonymous point collection. Although many details of P5C are still left unspecified, we deem this necessary, as many of the functional features and their security properties are inspired by practical requirements. Hence, some definitions can only be understood with the "right" application in mind. In the following, we sketch important aspects of applying P5C in some selected applications. From a high-level perspective, applying P5C to these applications seems mostly straightforward. Nonetheless, there are some technical subtleties that need to be considered:

• The representation of integer values in a finite group and the connected problem of over-/underflows or wraparounds. This results in the asymmetry of Deposit vs. Disburse.

• Depending on the application, users need to be forced to actually run Deposit or Disburse if doing so is not to their benefit, despite the fact that they are anonymous.

Before we discuss some concrete applications and how the tasks of P5C are specifically used, we elaborate more on the asymmetry of Deposit vs. Disburse. First, we pin down the precise meaning of two colloquial terms in our context.

#### **Definition 2.2 (Price, Balance)**


In P5C the price and the balance are both encoded as elements of ℤ<sub>p</sub> with *p* being the prime order of the used elliptic curve. An encoding of integers from ℤ in ℤ<sub>p</sub> only makes sense relative to a fixed representation of ℤ<sub>p</sub>. Two obvious representations are ℤ<sub>p</sub> ≜ {0, …, *p* − 1} ⊂ ℤ or ℤ<sub>p</sub> ≜ {−(*p*−1)/2, …, 0, …, (*p*−1)/2} ⊂ ℤ. This poses the well-known problem of wraparounds and over-/underflows. We use the terms in the following meaning.

#### **Definition 2.3 (Over-/Underflow, Wraparounds (informal))**


In short, for the scope of this thesis, an over-/underflow denotes an unintended change of sign due to passing an interval limit in the magnitude of *p*, while a wraparound denotes an unintended change of sign due to passing the interval limit at zero.

For a minimum of 80 bits of security, *p* is a prime in the magnitude of 2²⁵⁴. For most applications a wallet typically starts with a balance of zero. If we assume that the price of each transaction can be bounded by some reasonable value and there are only polynomially many transactions, then the event of an over-/underflow can be safely ignored. For example, if a point represents one cent, then a wallet has to conduct transactions with a total worth of 10⁷⁵ € before an over-/underflow happens. Hence, we do not consider any special precautions against over-/underflows. However, depending on the application, wraparounds might be a concern.

Certainly, the trivial scenario is an application which allows arbitrary transactions in both directions without giving any importance to the sign of the balance of a wallet as long as the sign is correct. In this case, the representation ℤ<sub>p</sub> ≜ {−(*p*−1)/2, …, 0, …, (*p*−1)/2} is the right choice and deposition and disbursement of points can be unified in the same task without further checks.
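To illustrate the encoding, the following toy snippet (with a deliberately small prime instead of the curve order) shows the symmetric representation and how an unchecked wraparound at zero manifests. It is purely illustrative and not part of P5C.

```python
# Toy illustration of representing integers in Z_p and of a wraparound at zero.
p = 101  # small stand-in prime; in P5C this would be the ~254-bit curve order

def to_symmetric(x):
    """Map x in {0,...,p-1} to its representative in {-(p-1)/2, ..., (p-1)/2}."""
    return x - p if x > (p - 1) // 2 else x

balance = 3
balance = (balance - 10) % p      # disburse 10 points although only 3 are available
print(balance)                    # 94 -- looks like a large positive balance ...
print(to_symmetric(balance))      # -7 -- ... or like a small debt, depending on the
                                  # chosen representation; without an explicit check the
                                  # wraparound at zero goes unnoticed.
```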

But for most applications the range of acceptable balances is bounded on one side. For example, users must not disburse more points than they have previously deposited and thereby unnoticeably pile up debt. In this case there is one "safe" direction leading away from the bound and one "unsafe" direction that needs some more precautions. This yields an asymmetry which is reflected by the two distinct tasks Deposit and Disburse. In the following we always use Deposit for the "safe" direction and Disburse for the "unsafe" direction. We deposit points by addition of positive values and disburse points by *subtraction* of positive values.

The task Deposit is conceptually simpler and provides a very high level of secrecy. As Deposit represents the "safe" direction, a user always remains anonymous and the previous balance of the used wallet is never unveiled when depositing points on the wallet.

The task Disburse must deal with potential wraparounds. In its simplest variant Disburse unveils the current balance of the wallet to the PoS (or operator) and the PoS (or operator) aborts if more points shall be disbursed than are deposited on the wallet. Obviously, this may infringe upon privacy and there are a variety of applications where it might be desirable not to reveal the current balance. To overcome this issue, the Disburse protocol could alternatively be extended by a range proof system such as [CCs08; CLZ12]. Range proofs are formally defined in Section 6.2.7 and allow the user to prove in zero-knowledge that the current balance is higher than the amount of disbursed points. Although there has been great progress in increasing the efficiency of those proof systems, they considerably slow down the execution on low-end hardware like mobile devices (cp. Chapter 9). Depending on possible real-time restrictions they are not always applicable. Please note that even though range proofs are zero-knowledge, the PoS (or operator) still learns the statement that the wallet's balance is sufficiently high, and thus Disburse is "less private" than Deposit. Additionally, for some applications it might also be expedient or even necessary that Disburse unveils the user's identity to the PoS/operator. In Sections 2.3.1 to 2.3.3 we sketch applications for all these options.
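The following sketch contrasts the two Disburse variants from the point of view of the verifying PoS (or operator). The interfaces are hypothetical stand-ins; `verify_range_proof` abstracts a range proof system such as [CCs08; CLZ12].

```python
# Two sketched variants of the balance check in Disburse (illustrative interfaces only).

def disburse_in_the_clear(balance, price):
    """Variant 1: the user reveals the current balance; the PoS checks it in the clear."""
    if balance < price:
        raise ValueError("insufficient balance")   # abort, a wraparound would occur
    return balance - price

def disburse_with_range_proof(balance_commitment, price, proof, verify_range_proof):
    """Variant 2: the user only sends a commitment to the balance together with a range
    proof showing balance - price >= 0; the PoS never learns the balance itself."""
    if not verify_range_proof(balance_commitment, price, proof):
        raise ValueError("range proof invalid")    # abort without learning the balance
```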

#### **2.3.1 Customer Loyalty Systems**


As the most basic application, we outline how P5C can be used to create a privacy-preserving loyalty system for customer retention. We stress that we do not aim to imitate modern loyalty programs such as Payback [PAY16] or Nectar [Aim16] whose primary focus is to analyze their customers' behavior, train a sales-response model and sell targeted advertisement. Here, we present a very basic loyalty program that resembles the classic trading stamps, i.e. a system that realizes a "buy *n*, get one for free" approach.

The operator is the merchant or an association of several merchants who jointly run the loyalty system. The roles of PoSes and users are obvious. To participate in the program users register themselves first and then obtain a wallet. Users collect points using Deposit which is completely anonymous and does not unveil the current balance. In order to redeem points (in exchange for some benefit) users run Disburse which unveils the total balance of collected points. The latter ensures that users cannot redeem too many points and obtain a negative balance.

In this scenario, we assume that there are many depositions prior to each disbursement. Hence, we assume that unveiling the balance during disbursement is not a severe loss of privacy, especially if the majority of users disburse points when they have reached similar balances.

In this scenario a dispute resolver is not necessarily required. Keeping the idea of trading stamps in the back of our mind, we do not see a good reason why blacklisting should be necessary. Also, an expiration date could be used and encoded into the user attributes as an alternative to blacklisting. In this case, wallets that are not renewed drop out of the system after a short time period. Of course, a dispute resolver could be used to offer users an additional "backup service" that allows them to restore their wallets in case of a loss or similar. However, using our full recalculation mechanism for this issue feels like an overdone solution. A simple backup of the most recent state of the wallet at the user side would do as well.

Also, a violation enforcer is not required. We assume users to voluntarily participate in Deposit, because they benefit from point collection. Vice versa, we assume users to obtain their reward (e.g. a free good) only *after* they have successfully completed Disburse.

**Simple Extensions** The basic application can simply be extended in several ways: Instead of a single "type" of points, multiple types of points can be used to differentiate between different types of goods. In this case a wallet does not store a single counter but a vector. Also, the user attributes that are attached to a wallet could be used to distinguish between different types of customers and let the pricing function depend on that. On top, the user attributes can be used to limit the wallet's lifetime and encourage customers to collect and redeem points faster.
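As a tiny illustration of the first extension, a wallet could keep its balance as a vector of counters, one per point type; the type names below are made up.

```python
# Sketch of the "multiple types of points" extension: a vector of counters per wallet.
from collections import Counter

wallet_balance = Counter()                  # one counter per point type
wallet_balance.update({"groceries": 5})     # Deposit of 5 grocery points
wallet_balance.update({"fuel": 20})         # Deposit of 20 fuel points
wallet_balance.subtract({"groceries": 3})   # Disburse of 3 grocery points
assert wallet_balance["groceries"] == 2 and wallet_balance["fuel"] == 20
```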

#### **2.3.2 Pre-Payment Systems**


A common application of pre-payment systems is micro-payments. First, users top up their wallet in exchange for real money and then successively spend their deposit. Typical examples are canteen systems, vending machines at work places, natatoriums or public transportation. Again, the roles of operator, PoSes and users are quite obvious.

Typically, users either present their wallet *once*, e.g. at a vending machine, or present their wallet *twice*, e.g. at a PoS upon entering *and* leaving. The latter allows the price to depend on the duration of the stay or the distance that has been traveled. To this end, the entry-PoS sets the attributes of the previous PoS in the wallet, but does not disburse any price. The exit-PoS reads the previous PoS attributes, clears them, calculates a price that may depend on user attributes, previous PoS attributes as well as its own attributes and disburses the price from the wallet. Of course, this way a pair of transactions between entry and exit becomes linkable, but not with other transactions. For the role of attributes and a detailed discussion see Section 2.4.

In a pre-payment scenario, topping up a wallet represents the "safe" direction and is realized by Deposit. As in Section 2.3.1, spending points is the "unsafe" direction and is realized by Disburse. But contrary to the previous example, a single Deposit transaction that is privacy-preserving by default is followed by a (long) sequence of Disburse transactions. Hence, unveiling the previous balance during each Disburse to vouch for sufficient funds might allow individual transactions to be linked, especially if there is a small and fixed set of admissible prices and fractional balances. A more privacy-friendly solution are so-called range proofs that show in zero-knowledge that the previous balance is higher than the price to be withdrawn.

Including a dispute resolver into this scenario is at least useful, could foster the acceptance of an anonymous payment system or might even be required by legal regulations. Typically, the PoSes are unmanned turnstiles or similar physical barriers. In case a user has already lost points, i.e. Disburse completed successfully, but the barrier fails to open, a dispute can usually not be settled on the spot. Instead, users simply try a second time (at a different turnstile), provisionally volunteer to pay twice and file a claim afterwards. Here, a recalculation mechanism that selectively lifts the anonymity of the questionable transactions is expedient. Also, a lost wallet can be blacklisted such that a potential finder cannot use it and the legitimate owner can be compensated for the remaining balance.

A violation enforcer is not required. As in Section 2.3.1, users will likely not prematurely abort Deposit, because they (physically) paid for the price to be deposited. In case of Disburse, the vending machine or turnstile physically ensures that access is only granted after the price has successfully been withdrawn.

#### **2.3.3 Post-Payment Systems**


A post-payment system is not simply the inversion of a pre-payment system as presented in the previous section. Post-payment systems are preferable over pre-payment systems if a high throughput of users is vital. In these scenarios, the speed of admissions must not drop either because users have simply forgotten to sufficiently top up their balance (as it might be the case in a pre-payment system) or because a transaction fails for other reasons. In some scenarios it might even be impossible or undesirable to prevent users from accessing the good/service without paying first. In order to make this example more interesting and set it further apart from a pre-payment system, we focus on this special kind of scenario.

This being said, a post-payment system differs from a pre-payment system in two aspects (besides the fact that the meaning of points is "inverted"). Firstly, users must be forced to eventually clear their collected debt. As opposed to the previous examples, there is no inherent incentive for the users to do so. Secondly, "free-riders" who are able to gain admission at no charge must be pursued after the fact. Both aspects conflict with the anonymity of users.

In order to solve the first issue, a limited lifetime is encoded into a user's wallet as part of the user attributes. We assume that the system uses fixed billing periods, e.g. monthly billing periods, which are the same for all users. Prior to the beginning of a billing period, users obtain a fresh wallet from the operator using IssueWallet. As IssueWallet is identifying, the operator records which user owns a wallet for a particular billing period. Within a billing period, users collect debt at PoSes using Deposit. Please note that adding points to the wallet actually means increasing the owed debt. Again, deposition is the "safe" direction, does not unveil the previous balance and is non-identifying. At the end of a billing period, users are requested to clear their owed debt by running Disburse with the operator. In this scenario, Disburse unveils the total balance and the user's identity such that the operator can invoice the user. As in Section 2.3.1 we assume that the total balance sufficiently masks the individual Deposit transactions. A successful disbursement invalidates the wallet. After having cleared the owed debt (in the real world using a traditional payment method), users may run IssueWallet again to obtain a new wallet for the next billing period.

The dispute resolver is necessary, if users refuse to clear their last wallet and accept not to get issued a new one. In this case, the operator and the dispute resolver can jointly recover all transactions of the outstanding wallet and the operator can then pursue the debtor. Also, a dispute resolver is expedient for the same reasons as in Section 2.3.2, e.g. lifting privacy in case of a dispute or immediate blacklisting of lost or stolen wallets with the consent of the user.

In order to pursue free-riders that refuse to collect debt, but gain access to the service nonetheless, the involved PoS triggers the violation enforcer which prosecutes and punishes the free-rider, if found guilty. To this end, the violation enforcer needs to identify the user out-of-band. Typically, this involves taking a photo of the suspect using a camera that is mounted on every PoS but is owned and operated by the violation enforcer. We stress that we used the term "trigger" on purpose: We assume the communication between the PoS (or operator, resp.) and the violation enforcer to be one-way. Otherwise, a curious PoS/operator might be tempted to exploit the cameras in order to lift the privacy in each and every transaction. Due to technical limitations, it might be impossible to exactly determine which user triggered the camera. To settle this situation, the violation enforcer summons all users under investigation to run the task ProveParticipation. This task allows all innocent users to prove their participation in a matching Deposit task with the particular PoS.

We illustrate the scenario using electronic toll collection (ETC) in an open-road setting as a concrete example. This scenario is considered by Nagel et al. [Nag+20]. In this setting the violation enforcer is typically a police authority or another law enforcement agency. Moreover, users correspond to vehicles and PoSes to toll gantries. The dispute resolver could be an NGO or another public authority, like the data protection authority, a court or the department of justice. In an open-road setting, vehicles pass through toll gantries at normal travel speed and are tolled in transit. Due to a variety of reasons, a vehicle might simply pass the toll gantry without collecting debt. In these cases, the toll gantry triggers a camera. Especially in case of multi-lane roads, several photos of more than one vehicle being in the range of the toll gantry are taken or a single photo might show several suspects driving close to each other [Kap18].

We would like to stress two aspects: Firstly, a user only needs to participate in ProveParticipation if something has failed and if the user is in the set of suspects. Secondly, we assume that all erroneously suspected users volunteer to run ProveParticipation. Moreover, in any case, users always have the option to appeal to the dispute resolver and thereby unveil their successful participation in a transaction at the PoS under audit. But the dispute resolver unveils *all* transactions of a particular wallet, while ProveParticipation only proves the participation in a *single* transaction. This means, ProveParticipation is more selective and less privacy infringing.

**Simple Extensions** The user attributes can be used not only to limit the wallet's lifetime, but could also encode further attributes to distinguish between different classes of users as in Section 2.3.1. Also, the previous PoS attribute could be encoded into the wallet as in Section 2.3.2 to realize distance-based tolling. Then, the pricing function may be dynamic and depend on different factors like the current time and congestion, the number of axles (recognized by sensors attached to the PoS), some user attributes attached to the wallet as well as some attributes of the previous PoS the user drove by.

#### **2.3.4 Further Applications and Running Prime Example**

The aforementioned examples are by far not a complete list. For example, in an anonymous reputation system with discrete reputation levels that correspond to intervals of reputation points, e.g. 0–9 corresponds to novice, 10–19 is beginner, up to 90–99 for expert, the range of admissible points is bounded on both sides. This peculiarity is not covered by any of the examples above. In [Nag+17] we sketched how this can be realized without using range proofs if anonymous point collection is combined with an anonymous reputation system. Nonetheless, we deem this list a sufficient indication of how to apply anonymous point collection to further applications.

For the remainder of this thesis, we use the post-payment system from Section 2.3.3 as our prime example and baseline. The sketched post-payment system requires the most features (such as a violation enforcer) of all examples. This also implies that—if not stated otherwise—we consider a variant of Disburse that is executed between a user and the operator, unveils the balance and identifies the user. This variant is used to formally define anonymous point collection in Chapter 4 and to realize P5C in Chapter 7. Concentrating on a single variant keeps the presentation simpler than tedious case-by-case distinctions.

## **2.4 Attributes, Pricing Function and Privacy Leakage**

Our system involves two types of attribute vectors: user attributes *a*<sub>U</sub> as well as PoS attributes *a*<sub>P</sub>. User attributes are stored in the wallet and are set when the wallet is issued. PoS attributes are part of the PoS certificate and set when the PoS is certified. Moreover, the attributes of the participating PoS are written to the wallet when the wallet is issued⁴ and when points are deposited or disbursed. In other words, a wallet carries the attributes of the previous PoS it has interacted with besides its own user attributes. As a trivial generalization the attributes of a PoS could be split into a "full" attribute vector that is attached to the PoS' certificate and a sub-vector of attributes that is carried by the wallet as the (partial) attributes of the previous PoS. For the ease of notation, we only consider a single vector.

⁴ In IssueWallet the operator plays the role of a PoS using a self-signed PoS certificate

We do not stipulate which kind of attributes or how many of them are used. Those details depend on the concrete pricing model of the application with a pricing function price(*a*<sub>U</sub>, *a*<sub>P</sub>, *a*<sub>P</sub><sup>prev</sup>, *aux*) depending on the user attributes *a*<sub>U</sub>, the current and previous PoS attributes *a*<sub>P</sub>, *a*<sub>P</sub><sup>prev</sup> and auxiliary, publicly verifiable input *aux* (e.g., time of day, weather conditions, …). However, we expect that for most scenarios very little information needs to be encoded into these attributes. Typical examples have been sketched in Sections 2.3.1 to 2.3.3. For instance, limiting the validity of a wallet is quite common, either to encourage users to increase the volume of sales or—if points on a wallet represent some kind of debt—to actually force users to eventually clear their debt. Clearly, for privacy reasons, unique expiration dates in attributes need to be avoided. In case of unattended PoSes, one might want PoSes to also have an expiration date which is periodically renewed and encoded as a PoS attribute. As the secrets of a PoS allow to tamper with a wallet's balance, such an expiration date mitigates the damage of a stolen or compromised PoS. Also pricing models that are based on the distance between two PoSes or the duration of admission can be realized by using PoS attributes to distinguish between entry and exit PoSes.⁵
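As an illustration, a possible pricing function for the distance-based tolling example of Section 2.3.3 might look as follows. All attribute names and tariff values are invented for the sketch and are not prescribed by P5C.

```python
# Hypothetical pricing function price(a_U, a_P, a_P_prev, aux), evaluated in the clear.

def price(a_user, a_pos, a_pos_prev, aux):
    rate = 0.17 if a_user["vehicle_class"] == "truck" else 0.07   # EUR per km
    distance = abs(a_pos["road_km"] - a_pos_prev["road_km"])      # entry vs. exit gantry
    surcharge = 1.5 if aux["congestion"] else 1.0                 # public, verifiable input
    return round(distance * rate * surcharge * 100)               # price in points (cents)

p = price({"vehicle_class": "car"},
          {"road_km": 40.0},
          {"road_km": 20.0},
          {"congestion": False})    # 20 km * 0.07 EUR/km = 1.40 EUR = 140 points
```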

Obviously, the concrete content of the attributes affects the "level" of user privacy in an instantiation of our system. In case of our running prime example (cf. Sections 2.3.3 and 2.3.4) the goal is to provide provable privacy up to what can possibly be deduced by


Our framework guarantees that protocol runs of honest users do not leak anything (useful) beyond that (cp. Chapter 5). In Section 5.3, we analyze the impact of these attributes on the privacy leakage of our system in more detail.

Item (1) of the previous paragraph already indicates that in our design of an anonymous point collection system the price of a transaction is determined by the PoS⁶ that unilaterally evaluates the pricing function price and thus needs to know the attributes of the user and the previously visited PoS. This design might seem not as "ideal" as it could be and comes with two obvious drawbacks at first glance: It may needlessly infringe upon the user's privacy, and the PoS could deviate from the "right" price. This has been an intentional design decision with respect to real-world applicability.

⁵ In this way, the entry and exit point can be linked. Still, our system ensures that the user is anonymous and multiple entry/exit pairs are unlinkable.

⁶ In the next few lines, we only use the term PoS, but what is said also applies to the operator.

Please remember that the PoS and user must efficiently evaluate the pricing function price themselves in the real implementation, without the help of a third party, due to the offline capabilities. Ultimately, this boils down to two options: Either the pricing function is evaluated in the clear, which implies that both parties learn their mutual inputs, or the PoS and user run some sort of secure two-party computation (2PC).

Depending on the complexity of the pricing function, general 2PC techniques might be too inefficient to meet the real-time requirements of most applications, especially if low-end devices like smartphones are involved. Note that it does not suffice to pass the attributes as input to the 2PC and merely evaluate the pricing function, but the 2PC must also ensure that the "correct" inputs are used, i.e. those that are attached to the wallet and the PoS certificate. Skipping ahead to the implementation, this would imply—among other things—validating signatures inside the 2PC. Of course, there might be extremely simple pricing functions that also exhibit some kind of "structural compatibility" with the building blocks of an instantiation of our scheme such that one can abstain from general 2PC techniques and resort to tailored techniques that nicely interplay with the other building blocks in a white-box fashion. However, these cases are presumably rare. Also, evaluating the pricing function with 2PC and keeping the inputs secret only has a beneficial impact on the achieved "level" of privacy if the pricing function is sufficiently intricate. If the pricing function allows to infer the inputs from the price, using 2PC yields no benefits. In summary, we conjecture that most applications fall into one of the following three categories:


In conclusion, we are convinced that evaluating the pricing function in the clear is the right choice.

Also, with the timing-constraints of typical applications in mind, we assume that users provisionally accept any price willingly in order to proceed and (in case of a dispute) file an out-of-band claim later. To this end, F<sub>apc</sub> outputs all relevant information about the transaction to the users. This enables them to check the price themselves and to appeal afterwards if the wrong amount of points is deposited. In the real world, this detectability will deter PoSes from manipulation.

In order to allow users to assess the privacy of a particular instantiation of our framework, we recommend that all attributes, all possible values for those attributes and how they are assigned, as well as the pricing function are fixed in advance and public. In this way, the operator is also discouraged from running trivial attacks by tampering with an individual's attribute values (e.g., by assigning a billing period value not assigned to any other user). To this end, a user needs to check if the assigned attribute values appear reasonable. Such checks could also be conducted (at random) by a regulatory authority or often also automatically by the user's device. Likewise, a PoS could try to break the privacy of a user by charging a peculiar price. However, these "attacks" cannot be ruled out by cryptographic means but are inherent to the application. Again, we assume that watchful users file a claim in such a case which may lead to an audit.
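Such an automated plausibility check on the user's device could, for instance, compare the assigned attribute values against the published assignment statistics. The sketch below is a hypothetical example of this idea, not a mechanism prescribed by P5C.

```python
# Hypothetical plausibility check: an attribute value that is shared by only very few
# other users could be an attempt to single the user out.

def attributes_plausible(assigned, published_histogram, min_group_size=1000):
    """assigned: dict attribute -> value as found in the user's wallet.
    published_histogram: dict attribute -> {value: number of users with that value}."""
    return all(
        published_histogram.get(attr, {}).get(val, 0) >= min_group_size
        for attr, val in assigned.items()
    )

ok = attributes_plausible({"billing_period": "2020-01", "customer_class": "standard"},
                          {"billing_period": {"2020-01": 250_000},
                           "customer_class": {"standard": 240_000, "premium": 10_000}})
```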

## **2.5 Handling of Aborts**

To enable offline capabilities, IssueWallet, Deposit and Disburse all generate double-spending tags which are eventually collected by the operator. If the operator encounters a pair of matching double-spending tags, the associated fraudulent user is identified. If Deposit is aborted after this double-spending tag has been generated but before the user has received a new wallet state, the user is left with an already used wallet state. In this case users have two options: (1) Either they re-use this wallet state in the next transaction and thus deliberately commit double-spending, or (2) they contact the operator to be issued a new wallet. In both cases the user is identified. Thus, an abort during Deposit allows privacy to be partly lifted. We stress two points here: Firstly, privacy is only lifted for one particular transaction. All remaining past and future transactions are unaffected. Secondly, privacy-under-abort is a well-known open problem and not specific to our system.⁷

We expect unintentional aborts to only occur infrequently and hence to result in an acceptable level of privacy infringement. Furthermore, the operator and PoSes have very little reason to abort purposely: They cannot target specific users, as the user is completely anonymous during Deposit. Only *after* a PoS aborts will the operator learn which user's privacy they have infringed upon. Hence, PoSes would have to abort a substantial amount of transactions in order to gain useful information. Doing so would draw attention, certainly lead to an audit of the operator and thus be contrary to their business interests.

⁷ Without perfect fair exchange, either the double-spending tag is created before the users obtain a new, valid state of their wallet or vice versa. In the first case, it is always possible to abort such that users are left with an invalid wallet. In the second case, a malicious user could purposely abort before a double-spending tag is generated. This would foil double-spending detection altogether.

In the opposite direction, the system or the surrounding application must ensure that a user does not benefit from an abort:


In any case, if a user aborts too late, they are identified by the double-spending mechanism.

Aborts of any task other than Deposit are trivially handled by repetition, as the involved parties are non-anonymous anyway. In the remainder of this thesis we ignore aborts.

## **2.6 Desired Properties**

With the applications from Section 2.3 in the back of our mind, the following list summarizes some exemplary, informal and desirable high-level properties that one would reasonably expect from anonymous point collection. These properties inspire the eventual definition of the ideal functionality in Chapter 4. Note that the ideal functionality (and not this list) is meant to formally capture the security of our proposed protocol. Nonetheless, this list may help to better understand the scenario and concludes this chapter. In Chapter 5 we demonstrate how these high-level goals are reflected in the ideal functionality. We stress that some of these goals are not immediately achieved by the ideal functionality alone (for example property (P5)), but only make sense in combination with the outer application. For these cases, our system exports appropriate interfaces to enable the outer application to implement this feature.


## **3 The UC Model**

We model P5C within the UC-framework by Canetti [Can01], which is a simulation-based security notion. The UC-framework carries forward the tradition of simulation-based security definitions for general protocols in an arbitrary context ([GMW86; Bea92; MR92]).

Simulation-based security notions come with the great advantage over other kinds of security notions that they usually explicate the achieved "level" of security very clearly. This stands in contrast with game-based definitions, where security is expressed by a number of games. A set of individual security games always bears the inherent danger that an important aspect is overlooked and thus not captured by any game, thereby inadvertently declaring an insecure system secure. Guarantees expressed in a simulation-based notion usually have more evident semantics, obtained from directly considering how a protocol is used, rather than from a hypothetical interaction of an adversary with a simplified game that encodes excluded attacks. More importantly, simulation-based security notions are also very good at making explicit what *cannot* be achieved.

The great advancement of the UC-framework over previous work is its general composition theorem. Security under (universal) composition is a very strong notion. The guarantees are provided even if a protocol is executed in an arbitrary environment, alongside other protocols. Moreover, composable frameworks facilitate modularity. One can define components with clean abstraction boundaries and use their idealized versions in a higher-level protocol. The overall security of the composed protocol follows from the composition theorem.

After its first publication the UC-framework and the independent, but conceptually very similar work by Pfitzmann and Waidner [PW01] have spawned a long series of further research on security definitions. Besides major revisions [Can00; Can05; Can13; Can18], extended frameworks either for generalized settings or for broadened problem definitions [Can+07; CV12; CR03] were proposed. Pass [Pas03], Prabhakaran and Sahai [PS04], Barak and Sahai [BS05], Canetti, Lin, and Pass [CLP10], and Broadnax et al. [Bro+17] analyzed relaxations of the UC-framework that require fewer assumptions but still yield a meaningful, (somewhat) composable security notion. Katz [Kat07] investigated alternative setup assumptions. Moreover, UC-compatible security definitions for most cryptographic primitives have been formalized (see [Can05] for an overview). There are other conceptually very similar frameworks that also come with a general composition theorem [BPW04; Küs06; HS15; Mau11; MR11]. Nonetheless, the UC-framework remains the de-facto standard to prove security of a protocol.

This chapter is organized as follows. In Section 3.1 we give a condensed overview and outline the big picture. Sections 3.2 and 3.3 proceed with a formal definition. This formal definition is based on a compilation of [Can05; Can13] with some backported fixes from [Can18; HS15]. Although we slightly deviate from the original UC framework and also clarify some aspects where the original framework is ambiguous, these details are not crucial and thus Sections 3.2 and 3.3 do not contain new information for readers who are familiar with the UC framework. Hence, the expert reader may skip these sections and proceed with Section 3.4. Nonetheless, we deem a formal foundation necessary in order to soundly discuss the communication model. Put briefly, the original UC-framework uses—what we call—"identity-based" addressing to send messages between parties. Clearly, this foils any attempt to define a protocol with anonymous parties right at the definitional level. We clarify this and related issues in Section 3.4. Also, we define some custom ideal functionalities for secure and anonymous communication in Section 3.4. This chapter concludes with Section 3.5 on setup assumptions, some well-known functionalities from the literature and implicit writing conventions for ideal functionalities.

## **3.1 Overview on the UC Framework**

In the UC-framework, an ideal functionality F (acting as TTP) is defined that plainly solves the problem at hand in a secure and privacy-preserving manner. A protocol π is said to be a (secure) *realization* of this ideal functionality F if no PPT-machine Z, called the *environment*, can distinguish between two experiments: the *real experiment* (running π) and the *ideal experiment* (using F).

In the *real experiment*, Z interacts with parties running the actual protocol and is supported by a real adversary A. The environment Z specifies the input of the honest parties, receives their output and determines the overall course of action. The adversary A is instructed by Z and represents Z's interface to the network, e.g., A reports all messages generated by any party to Z and can manipulate, reroute, inject and/or suppress messages on Z's order. Moreover, Z may instruct A to corrupt parties. In this case, A takes over the role of the corrupted party, reports its internal state to Z and from then on may arbitrarily deviate from the protocol in the name of the corrupted party as requested by Z.

In the *ideal experiment*, on the other hand, the protocol parties are mere dummies that pass their input to a trusted third party F and hand over F's output as their own output. The ideal functionality F executes the task at hand in a trustworthy manner and is incorruptible. The real adversary A is replaced by a simulator S. The simulator must mimic the behavior of A, e.g., simulate appropriate network messages (there are no network messages in the ideal experiment), and come up with a convincing internal state for corrupted parties (dummy parties do not have an internal state).

If no environment Z can tell executions of the real and the ideal experiment apart, then any successful attack existing in the real experiment would also exist in the ideal experiment. Therefore, the real protocol guarantees the same level of security as the (inherently secure) ideal functionality F.

Regarding privacy, the situation in UC is somewhat unsatisfying. As far as *input privacy* is concerned, the UC framework is perfectly suitable. Note that all parties (incl. the simulator) use the ideal functionality as a black box and only know what it explicitly allows them to know as part of their prescribed output. The output to the simulator is called leakage. This makes UC suitable to reason about input privacy in a very nice way. As no additional information is unveiled, the achieved level of input privacy can directly be deduced from the defined output of the ideal functionality. In other words, the privacy assessment can be conducted on the ideal functionality and is completely decoupled from the analysis of the protocol implementation. The proof of indistinguishability asserts that any secure realization of the functionality provides the same level of privacy.
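The following toy functionality (not F<sub>apc</sub> itself) illustrates the point: the leakage to the simulator is spelled out explicitly in the code of the functionality, so the achieved input privacy can be read off directly from its definition.

```python
# A toy ideal functionality for a deposit-like task, used only to illustrate how input
# privacy is determined by the functionality's declared outputs and leakage.

class ToyDepositFunctionality:
    """On a deposit of `value` points, the PoS and the simulator learn only `value`;
    the depositing user's identity and previous balance are never part of any output."""

    def __init__(self):
        self.wallets = {}   # internal state of F, invisible to all other parties

    def deposit(self, user_id, value):
        # `user_id` is the user's private input to F; F uses it internally only.
        self.wallets[user_id] = self.wallets.get(user_id, 0) + value
        output_to_user = {"new_balance": self.wallets[user_id]}
        output_to_pos = {"value": value}
        leakage_to_simulator = {"value": value}   # the explicitly declared leakage
        return output_to_user, output_to_pos, leakage_to_simulator
```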

With respect to *sender privacy*—or anonymity—the UC framework is somewhat flubbed. Strictly speaking, it is impossible to achieve anonymity in UC due to the way message routing and transportation are formally defined in [Can00; Can13; Can18]. If a party wants to send a message to another party, the actual message has to be prefixed with the sender's and receiver's identities, which are used as addressing information. The message is then handed over to the adversary for delivery, who may alter the message (including the addressing information) unless authenticated channels are assumed. But even in the plain model without authenticated channels the addressing information still exists as a prefix to the message and thus is learned by the receiver. We cope with this issue by unhinging the (implicit) message transportation from the UC-framework, defining a couple of ideal functionalities and thereby making message transportation explicit.

## **3.2 The Formal Model of Computation**

In UC, the basic entity of computation is a Turing machine (TM). Conceptually, UC distinguishes between interactive Turing machines (ITMs) and interactive Turing machine instances (ITIs). An ITM is an intangible and static object, while an ITI is a concrete instantiation of an ITM. The ITM defines all common characteristics, especially the code. An ITI is the runnable realization of an ITM and has a well-defined internal state (given by the content of its tapes, see below).

**Definition 3.1 (Interactive Turing Machine (ITM))** *An* interactive Turing machine *is a probabilistic Turing machine with the following properties. Besides its usual working tape, it has the following tapes:*

	- *(1)* An incoming message tape
	- *(2)* An input tape

*They are the counterparts to the outgoing message tape and output tape, resp. They serve as inboxes and are non-writable with respect to the possessing ITM.*

*•* A random tape *A read-only tape that contains a uniformly drawn bit string of sufficient length. We assume that this tape cannot be exhausted.*

*An ITM has two additional instructions:*


Please note that an ITM supports two different halt states: a) The usual halt state identical to an ordinary TM. The ITM transits into this halt state if prescribed by its program. In this case the ITM cannot be activated again, but is "dead". b) A temporary halt state adopted by the ITM due to a write‐external. In this state, the ITM "sleeps" until it is activated again.

**Definition 3.2 (Interactive Turing Machine Instance (ITI))** *An instance* M *of an ITM is defined by its extended identity eid* = (*prg*, *id*)*.*

The number and names of the different tapes related to messaging have evolved over the different revisions of UC [Can00; Can05; Can13; Can18]. For Definition 3.1 we chose a variant that we believe to be the "Best-Of". The number of tapes is the same as in [Can00]. Here, we have a pair of two tapes for passing local data (input/output tape) and a pair of tapes for network messaging (incoming/outgoing message tape). We slightly changed their names such that their pairwise association becomes more evident.
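For readers who prefer code over prose, the following schematic data structure mirrors the tape layout of Definition 3.1. It is merely a mnemonic and makes no claim to formal accuracy.

```python
# Schematic rendering of an ITI and its tapes (illustrative only).
from dataclasses import dataclass, field

@dataclass
class ITI:
    prg: str       # the code, shared by all instances of the same ITM
    pid: str       # party identifier
    sid: str       # session identifier
    input_tape: list = field(default_factory=list)             # local input (inbox)
    incoming_message_tape: list = field(default_factory=list)  # network messages (inbox)
    # The output tape and the outgoing message tape are the corresponding outboxes;
    # writing to them is realized by write-external onto another ITI's inbox tapes.
    halted: bool = False      # permanent halt state ("dead")
    sleeping: bool = True     # temporary halt after a write-external, until reactivated

    @property
    def extended_identity(self):
        return (self.prg, (self.pid, self.sid))
```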

We now define a system of ITIs. The following definition can be thought of as a set of rules for how ITIs are allowed to interact with each other. To increase the flexibility and to enable different models of computation, [Can13; Can18] take a two-step approach and introduce a so-called global control function as an intermediate step. To put it simply, this global control function is a bit-valued function {0, 1}<sup>∗</sup> → {0, 1} that determines whether a write‐external command is allowed (or not) depending on the content of the tapes of the involved TMs. In the second step, [Can13; Can18] concretely instantiate this global control function. However, the Universal Composition Theorem (implicitly) assumes that the global control function is instantiated exactly as is. We do not define such a function separately but incorporate it into the definition of a system of ITIs.

Moreover, we first give a definition for the identification of ITIs.

**Definition 3.3 (Party Identifier (PID), Session Identifier (SID))** *We assume that the identity string of an ITI is structured as id* = (*pid*,*sid*)*. The first part is called the party identifier (PID) and the second part is called session identifier (SID) of the ITI.*

The reason for this convention becomes clear after the next definition. The following definition appears to be very long-winded and cumbersome. An intuitive explanation that makes this definition actually trivial follows below.

**Definition 3.4 (System of Interactive Turing Machine Instances)** *A system S* = ⟨Z, A⟩ *of ITIs is defined by two ITIs* Z *and* A *that generate the system.* Z *is called the initial ITI or the environment.* A *is called the adversary. More ITIs* M ∈ *S are invoked while the system is executed. If an ITI* M *invokes (i.e. creates) a new ITI* M′*,* M *is called the direct parent of* M′ *and* M′ *is called the direct subsidiary of* M*. In the following let m* = (*eid*<sub>snd</sub>, *eid*<sub>rcv</sub>, *x*) *denote the extended message that has been passed as the parameter to* write‐external*. Also let eid*<sub>snd</sub> = (*prg*<sub>snd</sub>, *id*<sub>snd</sub>) *and id*<sub>snd</sub> = (*pid*<sub>snd</sub>, *sid*<sub>snd</sub>) *denote the extended ID, the code, the ID, the PID and SID of the* claimed *sender. Likewise, eid*<sub>rcv</sub> = (*prg*<sub>rcv</sub>, *id*<sub>rcv</sub>) *and id*<sub>rcv</sub> = (*pid*<sub>rcv</sub>, *sid*<sub>rcv</sub>) *denote the same for the* claimed *receiver. Moreover, eid* = (*prg*, *id*) *and id* = (*pid*, *sid*) *belong to the* true *sender and eid*′ = (*prg*′, *id*′) *and id*′ = (*pid*′, *sid*′) *belong to the* true *receiver. Beware that the claimed sender/receiver does not necessarily equal the true sender/receiver. If we say the execution fails, the system immediately halts and outputs a special failure symbol. Then, the execution of S on external input z is governed by the following rules:*

	- *(1) If* Z *uses the outgoing message tape, the execution fails.*
	- *(2) If* Z *uses the output tape:*
		- *(a) If pid*<sub>snd</sub> ≠ *pid*<sub>rcv</sub> *holds, the execution fails.*
		- *(b) If the destination ITI* M′ *with id*′ = *id*<sub>rcv</sub> *does not exist, a new ITI* M′ *with eid*′ = (*prg*′, *id*′) *is invoked and x is written on its input tape. The new ITI* M′ *becomes a direct subsidiary of* Z *and* Z *its direct parent.*
		- *(c) If the destination ITI* M′ *with id*′ = *id*<sub>rcv</sub> *exists:*
			- *(i) If prg*′ ≠ *prg*<sub>rcv</sub>*, the execution fails.*
			- *(ii) If* M′ *is not a direct subsidiary of* Z*, the execution fails.*
			- *(iii) If eid*<sub>snd</sub> *does not equal the extended identity that has been used by* Z *as the sender's identity when* write‐external *was called the first time for this particular receiver, the execution fails.*
			- *(iv) Else, x is written on the input tape of* M′*.*
	- *(1) If* A *uses the outgoing message tape and the destination ITI* M′ *with id*′ = *id*<sub>rcv</sub> *exists, x is written onto its incoming message tape, else the execution fails.*
	- *(2) If* A *uses the output tape and id*<sub>rcv</sub> *is the identity of* Z*, x is written on the input tape of* Z*, else the execution fails.*
	- *(1) If* M *uses the outgoing message tape and id*<sub>snd</sub> = *id and sid*<sub>snd</sub> = *sid*<sub>rcv</sub> *holds, then x is written onto the incoming message tape of* A*, else the execution fails.*
	- *(2) If* M *uses the output tape:*
		- *(a) If id*<sub>snd</sub> ≠ *id or pid*<sub>snd</sub> ≠ *pid*<sub>rcv</sub> *holds, the execution fails.*
		- *(b) If the destination ITI* M′ *with id*′ = *id*<sub>rcv</sub> *does not exist, a new ITI* M′ *with eid*′ = (*prg*′, *id*′) *is invoked and x is written on its input tape. The new ITI* M′ *becomes a direct subsidiary of* M *and* M *its direct parent.*
		- *(c) If the destination ITI* M′ *with id*′ = *id*<sub>rcv</sub> *exists:*

We discuss this definition on two counts: Firstly, we give a graphical explanation that makes the definition more memorable, secondly, we highlight the differences to the original definition(s).

We start with "normal" ITIs (cp. Fig. 3.1). A "normal" ITI—neither the environment nor the adversary—can best be depicted as a single process running on a usual physical computer. The subset of all ITIs that share the same PID are located on the same machine, i.e. they constitute

Figure 3.1: A System of ITIs

ITI Input/Output Incoming/Outgoing Message

a party (framed by dashed line in Fig. 3.1). Within a party the involved ITIs communicate via their input/output tapes; the input/output tapes must not be used for communication with ITIs belonging to other parties (cp. Item (2a)). This kind of communication is secret, trustworthy, reliable and immediate. Especially, the ITIs know each other's code (cp. Item (2c-i)). This models the fact that some "main process" usually knows the called "subroutine function". Moreover, the caller must not lie about its own identity (cp. Item (2a)). The only exception to this rule affects the output of the root ITIs to the environment Z (cp. Item (2c-iii)). Here, the environment that initially has invoked the root ITIs and thus has determined their code *prg* has no guarantee that the ITI returning output to the environment actually runs the stipulated code.¹ New subsidiary ITIs (e.g. "subroutine functions") are implicitly created, if they are called for the first time (cp. Item (2b)). Communication is only allowed along the calling graph, i.e. the hierarchy of ITIs within a party forms a tree (cp. Item (2c-ii)).

While parties group ITIs "vertically", sessions group ITIs "horizontally" (framed by a dotted line in Fig. 3.1). ITIs belonging to the same session can best be depicted as the different legs of a communication channel (e.g. the initiator and the target of a TLS connection). ITIs of the same session use the incoming/outgoing message tapes for communication. Again, the sender must not lie about its identity and is only allowed to send messages to other ITIs of the same session (cp. Item (1)). However, there are no security guarantees whatsoever as all incoming/outgoing messages are routed through the adversary A who represents an unreliable and untrustworthy network.

The environment Z is the initial ITI of the whole system. It binds together all parties and creates their root ITIs. As its name suggests, Z constitutes the environment in which the parties are executed and also incorporates any other processes that run concurrently. Intuitively, Z represents the "mastermind" that controls all parties by providing their input and processing their output. Also, Z is allowed to communicate with the adversary A via input/output, but must not participate in the network itself (cp. Item (1)). If Z wishes to do so, it may request A to act on its behalf (see below). To enable Z to be the parent of root ITIs of different parties, the restrictions on using its input/output tape are relaxed compared to "normal" parties. The environment Z is allowed to impersonate different parties. If Z invokes the root ITI of a not yet existing party, it must do so using the designated PID (cp. Item (2a)) of the new party. Note that neither Item (2a) nor Item (2b) demands that the "claimed" extended identity *eid*<sub>snd</sub> used by Z as a sender must be Z's true identity. However, if Z has called an ITI once, it has to do so consistently (Item (2c-iii)).

¹ This detail becomes important for the definition of protocol simulation.

The adversary A represents the network. As such, it must not use (local) input/output except for the communication with Z (cp. Item (2)). Any message that is written by any ITI M onto its outgoing message tape is handed over to A. Although M is not allowed to claim any other sender than itself, there are no restrictions on how the message is handled by A. Hence, A may arbitrarily manipulate, reroute, inject and/or suppress messages. A may send any (extended) message to any incoming message tape of any (existing) ITI with no restrictions on the claimed sender identity (cp. Item (1)).

Finally note that the model of asynchronous execution in UC is conceptually very similar to what is called *cooperative* (non-preemptive) multitasking in the field of operating systems. At every point of the execution only a single ITI is active and the scheduling is message-driven. In other words, an ITI remains active until it voluntarily waives activation by passing a message to another ITI.
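A rough sketch of this message-driven scheduling, with all dispatch rules of Definition 3.4 collapsed into a single `resolve` step, could look as follows. All interfaces are invented for illustration.

```python
# Illustrative activation loop: exactly one ITI is active at any time, and activation is
# passed along with each externally written message (the dispatch rules are simplified).

def run(system, environment, initial_input):
    active, message = environment, initial_input
    while True:
        result = active.activate(message)    # runs until the ITI writes externally or halts
        if result is None:                   # the environment halted: the execution ends
            return environment.output
        destination, message = result        # write-external hands the activation on ...
        active = system.resolve(destination) # ... to the addressed ITI (or fails, cp. Def. 3.4)
```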

**Deviations from the original UC framework** The above definition enforces that the hierarchy of ITIs belonging to the same party is a tree and ensures that passing local input/output is only allowed along this hierarchy. As with other aspects of the framework, the concrete details have evolved over time, and [Can05; Can13] explicitly claim² to allow arbitrary local communication among ITIs of the same party in order not to unnecessarily exclude certain models of computation. However, a non-hierarchical system of ITIs turns out to be problematic with respect to the composition theorem and corruption. Hofheinz and Shoup [HS15] showed that the composition theorem as originally stated in [Can00] does not hold and therefore only consider trees for their own GNUC framework. To remedy this problem, Canetti [Can05] introduces the concept of *subroutine-respecting protocols*³ in a first step. In a second step, [Can18] additionally demands protocols to be *compliant*. In order to avoid these technicalities altogether, we follow the approach of Hofheinz and Shoup [HS15] and simply restrict the framework to parties with a tree-like calling hierarchy.

Moreover, Definition 3.4 clarifies that a new ITI is only allowed to be created by its parent via passing (local) input to it for the first time. In the original UC framework, a new ITI is created on-the-fly whenever a message of any kind is delivered to it, i.e. a new ITI may also be created by the adversary delivering an incoming (network) message to a non-existing ITI. Again, this flexibility raises some definitional problems. In Definition 3.4 the adversary's attempt to deliver an incoming (network) message to a non-existing recipient simply fails. Considering real computers and real programs we deem this clarification sufficient. The system of ITIs belonging to the same party must be created bottom-up from its root and the receiving end of a communication needs to be created first and then wait for incoming messages.

² However, this is not reflected by the formal definition of the control function

³ Subroutine-respecting protocols do not necessarily adhere to a tree-like hierarchy but must not be arbitrary either.

## **3.3 UC Protocols and Protocol Emulation**

After having defined the computational model in the previous section, this section defines how to use it to model interactive computer programs and how to define their security through *emulation*.

#### **Definition 3.5 (Protocol, Protocol Instance)**


In the UC framework the terms "code", "protocol" and ITM are somewhat synonymous and frequently used interchangeably. If one wishes to distinguish them, one might say that an ITM defines what can be computed in principle (i.e. it defines the limits of the computational model), a code defines how something is computed (i.e. a list of instructions), and a protocol is both combined (i.e. a code that is compatible with the capabilities of an ITM). If a protocol comprises different roles (e.g. the sender and receiver of a commitment protocol), the code includes the code for all roles and the particular role of an ITI within the session is selected through appropriate input upon invocation. Please note that a non-interactive program that is executed on a single ITI is also called a protocol.

The UC framework defines security as a relative concept by comparing a protocol to some other protocol; stating that the former is secure simply means that it is at least as secure as the latter. Hence, in order to get any useful results, we need something that we assume to be inherently secure as our comparison object. These objects are called *ideal functionalities* and are defined next.

**Definition 3.6 (Ideal Functionality)** *An ideal functionality* F *is an ITM with the following special properties:*


**Definition 3.7 (Ideal Protocol, Dummy Party)** *An ideal protocol* IDEAL<sub>F</sub> *consists of an ideal functionality* F *together with an ITM, the so-called dummy party.⁴ An instance of an ideal protocol*

⁴ Canetti sometimes uses the term party synonymously for a single ITM or ITI. Although the dummy party is a particular ITM (and not a set of ITIs sharing the same PID), we keep this term.

Figure 3.2: A System of ITIs with an ideal functionality F

*consists of an instance of* F *and one instance of the dummy party for each PID that* F *passes output to. The instances of the dummy party have the same SID as the instance of* F*. If an ITI* M *invokes an ITI with the code of* F *with id* = (*pid*,*sid*) *for the first time, an instance of* F *with id*<sub>F</sub> = (⊥,*sid*) *and an instance of the dummy party with id*<sub>dummy</sub> = (*pid*,*sid*) *are invoked. The dummy party becomes a subsidiary of* M*. If another ITI* M′ *belonging to a different party passes input to an instance of* F *with id* = (*pid*′,*sid*) *and there is already an ITI with the code of* F *and SID sid, then only a suitable dummy party with id*′<sub>dummy</sub> = (*pid*′,*sid*) *is invoked. Dummy parties simply pass input/output between their parent ITI and the instance of* F*.*

Sloppily, ideal functionalities are thought to span across multiple parties and to be part of each party via one dummy party (cp. Fig. 3.2). This allows ideal functionalities to conduct distributed tasks using (local) input/output only, thereby evading the adversary who controls remote messaging.
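As a minimal illustration (hypothetical names, heavily simplified), a dummy party merely relays input from its parent to the single shared instance of F and relays F's output back up:

```python
class IdealFunctionality:
    """Toy shared instance of F: answers every request, tagged with the calling PID."""
    def __init__(self, sid):
        self.sid = sid

    def handle(self, pid, message):
        return ("output", self.sid, pid, message)


class DummyParty:
    """Relays input/output between its parent ITI and the instance of F."""
    def __init__(self, pid, functionality):
        self.pid = pid
        self.functionality = functionality

    def input(self, message):                                  # called by the parent ITI
        return self.functionality.handle(self.pid, message)    # output handed back up


f = IdealFunctionality(sid="sid-1")
alice, bob = DummyParty("alice", f), DummyParty("bob", f)
print(alice.input(("commit", 0)))
print(bob.input(("open",)))
```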

A reasonable security framework also requires a mechanism for the adversary to corrupt ITIs.

**Definition 3.8 (Corruption)** *The adversary* A *is allowed to* corrupt *ITIs with pid* ≠ ⊥*. In this case the content of all tapes of the corrupted ITI is handed over to* A*. From then on,* A *impersonates the corrupted ITI. Whenever (local) input is passed to the corrupted ITI from its parent or from one of its subsidiaries via* write‐external*, the message is written onto the input tape of* A *and* A *gets activated. Vice versa, the adversary* A *is allowed to pass input to the parent or the subsidiaries of the corrupted ITI as if the message came from the particular ITI.*

Descriptively, corruptions can be depicted as if the adversary incorporates the corrupted ITI and all communication lines from or to the ITI are re-attached to the adversary.
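The following sketch (hypothetical names) mirrors this picture: upon corruption all tapes are handed over, and every later communication line is re-attached to the adversary.

```python
class Adversary:
    def __init__(self):
        self.knowledge = {}

    def learn(self, pid, tapes):
        self.knowledge[pid] = tapes               # complete content of all tapes

    def impersonate(self, pid, message):
        return ("adversarial answer", pid, message)


class CorruptibleITI:
    def __init__(self, pid, tapes):
        self.pid = pid
        self.tapes = tapes                        # past input/output, keys, randomness, ...
        self.corrupted_by = None

    def corrupt(self, adversary):
        """Hand over all tapes; there is no chance to erase keys or otherwise object."""
        self.corrupted_by = adversary
        adversary.learn(self.pid, self.tapes)

    def receive_input(self, message):
        if self.corrupted_by is not None:
            # the communication line is re-attached to the adversary
            return self.corrupted_by.impersonate(self.pid, message)
        return ("honest answer", self.pid, message)


adv = Adversary()
user = CorruptibleITI("user-1", tapes={"sk": "secret key material"})
print(user.receive_input("ping"))
user.corrupt(adv)
print(user.receive_input("ping"))
```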

Please note that the condition *pid* ≠ ⊥ prevents ideal functionalities from being corrupted. However, dummy parties are corruptible and thus the adversary learns all past input/output of the ideal functionality for the particular party. Moreover, the ideal functionality is notified that the dummy party is corrupted. We stress that the code of an ideal functionality may depend on the corruption status of its associated dummy parties.

**Deviations from the original UC framework** Again, Definition 3.8 is more restrictive than the original corruption mechanism. In [Can00; Can05; Can13; Can18] the adversary corrupts an ITI by sending a special corrupt message to the ITI's incoming message tape in order to express its wish to corrupt the recipient. The ITI can then decide to ignore the request, to surrender completely (as above) or to do something else; typically, this means altering some parts of its tapes (e.g. erasing secret keys) before handing over the tapes' content to the adversary. This enables different kinds of corruption models. The corruption model underlying Definition 3.8 in the case of "normal" ITIs (neither a dummy party nor an ideal functionality) is called the *Byzantine corruption model* in [Can00; Can05; Can13; Can18] and is the most commonly used one. The fact that the adversary learns all past input/output upon corruption of a dummy party is called *standard corruption* in [Can05]. All ideal functionalities in [Can05] are of this type. We deem Byzantine corruption sufficient for two reasons. Firstly, considering real software it seems peculiar that an ITI should have the power to object to corruption. Normally, "programs" do not know if they are corrupted, and allowing ITIs to run arbitrary code upon corruption might encourage the definition of protocols (in pseudo-code) that turn out to be unimplementable in the "real world" using real programming languages. Secondly, the UC framework provides an alternative approach to model incorruptible elements. Of course, there might be valid use cases where a more fine-grained reaction to corruption is desirable and an essential part of the security concept. If certain parts of an ITI should withstand corruption and other parts not, then the system should be re-factored with the incorruptible parts being outsourced to an independent component that is modeled as an ideal functionality. We deem this alternative approach to be "the right one" as it spells out the required trust assumptions more explicitly.

Finally, we are ready to define the UC experiment and UC security.

**Definition 3.9 (The UC Experiment)** *Let the environment* Z *and the adversary* A *be two ITMs as in Definition 3.4 and let* π *be a protocol as in Definition 3.5. Then* EXEC<sub>π,A,Z</sub>(1<sup>n</sup>) *is defined as the execution of the system* ⟨Z, A⟩ *on input* 1<sup>n</sup> *with the additional restriction that any ITI invoked by* Z *must have the same SID and the code of the ITI is (silently) enforced to be* π*. The output of* EXEC<sub>π,A,Z</sub>(1<sup>n</sup>) *is the output of* ⟨Z, A⟩*. The protocol* π *is called the* challenge protocol*.*

**Definition 3.10 (Protocol Emulation, UC Realization, UC Security)** *Let* π, φ *be two protocols. We define*

$$
\pi \geq_{\mathsf{UC}} \varphi \quad {:}\Longleftrightarrow \quad \forall\, \mathsf{A}\; \exists\, \mathsf{S}\; \forall\, \mathsf{Z}: \mathsf{EXEC}_{\pi,\mathsf{A},\mathsf{Z}}(1^{n}) \stackrel{c}{\equiv} \mathsf{EXEC}_{\varphi,\mathsf{S},\mathsf{Z}}(1^{n}) \tag{3.1}
$$

*In this case we say* π *emulates* φ *or* π *is a (UC-)secure realization of* φ*. The ITI* S *is called the* simulator*. Likewise, the left UC-experiment* EXEC<sub>π,A,Z</sub>(1<sup>n</sup>) *is called the* real game *and the right UC-experiment* EXEC<sub>φ,S,Z</sub>(1<sup>n</sup>) *the* simulated game *or* ideal game*.*

Sloppily, "π UC-realizes φ" means that no environment Z is able to distinguish whether it is interacting with an instance of the protocol π and a (real) adversary A or with an instance of the protocol φ and a simulator S mimicking the behavior of A. The order of quantifiers is important, i.e. the simulator S may depend on A but must not depend on the environment Z. We highlight two definitional issues: As the environment Z believes to interact with instances of π, Definition 3.9 enforces the challenge protocol to run the correct code agnostic to Z. Hence, the challenge session is an instance of φ in the ideal game although Z believes to invoke instances of π. For the same reason, Item (2c-iii) in Definition 3.4 ensures that the sender's identity (which encodes the sender's code) is erased if instances of the challenge protocol pass output to Z. Otherwise Z could trivially distinguish the games.

Definition 3.10 quantifies over two adversarial entities: the adversary A and the environment Z. The definition of UC-emulation can equivalently be rephrased such that only one specific adversary, the so-called dummy adversary D, needs to be considered. This greatly simplifies the application of Definition 3.10, as consequently only a specific simulator S<sub>D</sub> for the prescribed adversary needs to be defined.

**Definition 3.11 (Dummy Adversary)** *The dummy adversary is an ITM with the following code:*


*(3) If* D *is activated by an input* (*id*<sub>Z</sub>, *id*<sub>D</sub>, *m*) *from* Z *with* *m* = (*id*<sub>snd</sub>, *id*<sub>rcv</sub>, …)*, then* D *passes* *m* *as output or sends* *m* *as an outgoing message. Please note:* D *can only use the (local) output tape in the name of id*<sub>snd</sub>*, if it has previously corrupted this ITI and thus* D *incorporates this ITI.*

The next theorem states that any protocol is already UC-secure if it is secure with respect to the dummy adversary.

**Theorem 3.12 (Completeness of the Dummy Adversary)** *Let* π *and* φ *be protocols and* D *the dummy adversary. Then* π *emulates* φ *in the sense of Definition 3.10 if and only if* π *emulates* φ *with respect to the dummy adversary. Formally:*

$$\begin{aligned} &\forall\, \mathsf{A}\; \exists\, \mathsf{S}\; \forall\, \mathsf{Z}: \mathsf{EXEC}_{\pi,\mathsf{A},\mathsf{Z}}(1^{n}) \stackrel{c}{\equiv} \mathsf{EXEC}_{\varphi,\mathsf{S},\mathsf{Z}}(1^{n}) \\ &\quad\Longleftrightarrow\quad \exists\, \mathsf{S}_{\mathcal{D}}\; \forall\, \mathsf{Z}: \mathsf{EXEC}_{\pi,\mathcal{D},\mathsf{Z}}(1^{n}) \stackrel{c}{\equiv} \mathsf{EXEC}_{\varphi,\mathsf{S}_{\mathcal{D}},\mathsf{Z}}(1^{n}) \end{aligned} \tag{3.3}$$

Informally, the dummy adversary is simply a thin communication wrapper around Z and helps Z to access the network. All "adversarial logic" has been put into the environment Z. For a proof, see [Can00; Can05; Can13; Can18].
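A minimal sketch (hypothetical interface) of this "thin wrapper" behavior: the dummy adversary reports every observed network message to Z verbatim and executes every instruction from Z verbatim.

```python
class DummyAdversary:
    """All adversarial logic lives in Z; the dummy adversary only forwards."""
    def __init__(self, deliver, report):
        self.deliver = deliver   # callable: (sender, receiver, payload) -> None
        self.report = report     # callable: observation -> None, i.e. output to Z

    def on_network_message(self, sender, receiver, payload):
        # report the observed (extended) message to the environment unchanged
        self.report(("observed", sender, receiver, payload))

    def on_instruction(self, sender, receiver, payload):
        # send exactly the message the environment asked for
        self.deliver(sender, receiver, payload)


log = []
dummy = DummyAdversary(deliver=lambda s, r, m: log.append(("sent", s, r, m)),
                       report=lambda obs: log.append(obs))
dummy.on_network_message("alice", "bob", "c = Com(0)")
dummy.on_instruction("eve", "bob", "c = Com(1)")
print(log)
```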

Before we conclude this section, we re-consider corruption. Both the scope of corruption and the time of corruption can be further restricted. The corruption mechanism as defined in Definition 3.8 allows ITIs belonging to the same party to be corrupted individually. Hofheinz and Shoup [HS15] show that the UC Composition Theorem as stated in [Can05] does not hold for this general type of corruption but needs further restrictions. To keep matters simple, we only consider PID-wise corruption from now on. As the name suggests, this means the adversary must either corrupt no ITI of a party or corrupt all of its ITIs at once.

**Definition 3.13 (PID-wise Corruption)** *A UC-experiment* EXEC<sub>π,A,Z</sub>(1<sup>n</sup>) *or a system of ITIs* ⟨Z, A⟩ *uses PID-wise corruption if for every PID either all ITIs sharing that PID are corrupted or none of them is.*

Additionally, the corruption model can be distinguished with respect to at what point of time the adversary is allowed to corrupt an ITI.

#### **Definition 3.14 (Static vs. Adaptive Corruption)**


A common misunderstanding is to deem adaptive corruption the more realistic model. The rationale behind this statement is that real programs or computers are usually not initially corrupted. However, this motivation falls short. First note that UC-security quantifies over all adversarial strategies. This includes adversaries that statically corrupt a party but then follow the prescribed protocol honestly at first and only deviate from the protocol later. From the perspective of another honest party this behavior is indistinguishable from a party that is honest at first and then corrupted adaptively. Hence, for all scenarios in which security is only guaranteed to permanently honest parties and security for eventually corrupted parties is deemed irrelevant, static corruption is the adequate model.

Instead, adaptive corruption is tightly related to deniability. Simplified, the simulator must simulate protocol messages without knowing the input/output of the honest party in the beginning and then, upon corruption (when the simulator learns the party's input/output), contrive appropriate secrets that consistently explain the party's past messages. Typically, this means that the simulator has an algorithm that computes consistent randomness given the actual past input/output, the transcript of messages and the keys. This randomness is then handed over to the environment by the simulator as the pretended randomness of the corrupted party. As the algorithm works for any tuple of input/output, transcript and keys, a modification of this algorithm can also be used by honest parties to plausibly deny a particular input/output to any third party.
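The connection can be made tangible with a deliberately trivial toy example (not the construction used in this thesis): with a one-time-pad "transcript", randomness that explains any claimed input can be computed after the fact, which is exactly the equivocation a simulator needs upon adaptive corruption and which an honest party could reuse for deniability.

```python
import os

def otp_transcript(message: bytes, randomness: bytes) -> bytes:
    """Toy 'protocol message': a one-time pad of the party's input with its randomness."""
    return bytes(m ^ r for m, r in zip(message, randomness))

def explain_randomness(claimed_input: bytes, transcript: bytes) -> bytes:
    """Return randomness under which the observed transcript is consistent with the
    claimed input; for the one-time pad this works for *any* claimed input."""
    return bytes(c ^ m for c, m in zip(transcript, claimed_input))

real_input = b"drive on toll road A"
randomness = os.urandom(len(real_input))
transcript = otp_transcript(real_input, randomness)

fake_input = b"stayed at home today"          # same length as the real input
fake_randomness = explain_randomness(fake_input, transcript)
assert otp_transcript(fake_input, fake_randomness) == transcript
print("the transcript is consistent with both inputs")
```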

For the sake of completeness, we shortly define the universal composition operator and state the composition theorem which lends the UC framework its name.

**Definition 3.15 (Universal Composition Operator)** *Let* ρ, π *and* φ *be protocols. The protocol*

$$
\rho^{\pi/\varphi} \tag{3.4}
$$

*is identical to* ρ *with the following modifications:*


**Theorem 3.16 (The UC-Theorem)** *Let* ρ, π *and* φ *be protocols and let* π ≥<sub>UC</sub> φ *hold. Then* ρ<sup>π/φ</sup> ≥<sub>UC</sub> ρ *holds.*

For a proof see [Can13]. The theorem stated there additionally demands the protocols involved to be subroutine-respecting. This is implicit here, as Definition 3.4 only allows this kind of protocols. Instead of ρ<sup>π/φ</sup> ≥<sub>UC</sub> ρ one usually writes ρ<sup>π</sup> ≥<sub>UC</sub> ρ<sup>φ</sup>. The following corollary illustrates the most frequent application of Theorem 3.16.

**Corollary 3.17 (UC Composition)** *Let* F *and* G *be ideal functionalities and* π, ρ *be protocols. Then*

$$\rho^{\mathsf{IDEAL}_{\mathcal{F}}} \geq_{\mathsf{UC}} \mathsf{IDEAL}_{\mathcal{G}},\; \pi \geq_{\mathsf{UC}} \mathsf{IDEAL}_{\mathcal{F}} \implies \rho^{\pi} \geq_{\mathsf{UC}} \mathsf{IDEAL}_{\mathcal{G}} \tag{3.5}$$

*or more sloppily*

$$
\rho^{\mathcal{F}} \geq_{\mathsf{UC}} \mathcal{G},\; \pi \geq_{\mathsf{UC}} \mathcal{F} \implies \rho^{\pi} \geq_{\mathsf{UC}} \mathcal{G} \tag{3.6}
$$

*holds.*

## **3.4 Communication Model and Anonymity**

As described in the previous section there are two types of channels hard-coded into the framework:

- (local) input/output between an ITI and its parent or subsidiaries within the same party, and
- incoming/outgoing (network) messages between parties, which are routed via the adversary.

Both types use the same kind of addressing mechanism: the actual message is prefixed by the extended identities of the sender and the receiver. These extended identities contain the PID, the SID and the code of the respective party. We call this *identity-based addressing*. While this method seems adequate for intra-party (i.e. local) communication and suffices for our purposes, it is problematic for cross-party (i.e. network) communication. Identity-based addressing does not appropriately capture how addressing is implemented in real-world networks and thus not only falls short of being a realistic model, but also prevents anonymous communication.

There are three related issues:

(1) Every message handed to the adversary is an extended message prefixed with the true identity of its sender, so the network (i.e. the adversary) always learns who sent a message.

(2) The recipient of an anonymously sent message has no means to address a reply to its originator, because it does not learn any address under which the originator can be reached.

(3) The communication model of the UC framework implicitly assumes that the involved parties have already agreed beforehand upon which protocol they are going to run, who has which role using which PID, and which SID they are going to use.

Apart from the inability to adequately model real computer networks, one might be tempted to argue that issues (2) and (3) are only of minor concern. As the description of subordinated ITIs is included in their parent's code, the environment Z (implicitly) invokes all ITIs. As Z gives input to the parties and therefore controls their participation in a session, there is no anonymity with respect to Z. Hence, one might say that directly using identities instead of addresses is an acceptable simplification of the model that avoids an additional level of indirection. From this point of view the avoidance of additional network addresses is at most a blemish of the model and one might assume that all the technicalities of real networks such as session setup or address resolution are pulled back into Z. In particular, with respect to issue (2) a receiver could even reply to the correct originator of an anonymous message, if the environment wants it to, because the environment knows the sender's identity and could pass the reply address as input to the receiver.⁵

Probably, issues (2) and (3) are part of the reason why it is commonly said that it is impossible to model privacy within the UC framework. If the environment Z triggers two parties to interact with each other and then asks the dummy adversary D to report the observed messages, Z knows to whom the messages belong. We claim, however, that this is a misconception of anonymity. The question is whether in the ideal model the simulator S—without using any information about the parties' identities—is able to simulate messages that are indistinguishable from the messages that D reports to Z in the real model. If the ideal functionality only outputs non-identifying information to the simulator and the simulator is still able to generate a convincing transcript (from Z's viewpoint), then anonymity is provided. But this is formally impossible due to issue (1). Remember that the dummy adversary in the real experiment receives an extended message (*id*<sub>snd</sub>, *id*<sub>rcv</sub>, *m*). Even if the simulator were able to simulate the actual message independently of the sender's identity, the simulator would still need to report a convincing extended message containing the sender's identity to Z.

To solve these issues, we explicitly introduce an ideal functionality Fmsg for cross-party communication and completely give up on using the incoming/outgoing messaging that is hard-coded into the UC framework. Our new messaging functionality Fmsg ensures the anonymity we require. Consequently, our real P5C protocol lives in an Fmsg-hybrid model.

⁵ Of course, the environment could lie about the originator's identity and input a wrong reply address to the recipient, such that the recipient sends its outgoing reply to the wrong destination. However, the network is under the control of the adversary anyway and thus providing the recipient with the wrong originator's identity gives no additional power to Z.

As the real dummy adversary and the environment in the real game are aware of Fmsg and thus do not expect to receive the sender's identity, the simulator in the ideal model does not need to provide one either. Using an ideal functionality allows us to stay in line with the original UC framework and also makes our requirements on the communication very explicit. (Alternatively, we would have had to redefine the messaging mechanism itself, which we feel to be the conceptually wrong way.)

Fmsg is a multi-party functionality that supports polynomially many communication sub-sessions (within one UC session) between pairs of parties. A multi-party functionality that supports multiple sub-sessions allows a distinguished party⁶ to announce its "existence" once in the beginning; from then on it can be reached by other parties at those parties' will. If we modeled Fmsg as a two-party functionality that only supported a single communication instead, a new instance of Fmsg would have to be instantiated for each communication and this would again raise the question of how the involved parties "find" each other in the first place. Several formulations of similar functionalities, e.g. Fauth (authenticated communication), Fsmt (secure message transfer), Fscs (secure communication sessions), Frsc (relaxed secure channels), exist in the literature [Can03; Can05; CK02; NMO05]. Fauth provides bilateral authentication, but only supports a single (one-shot) message and no confidentiality. Fscs captures the idea that communication is divided into three phases: first, a session is established utilizing some kind of communication identifier; second, several messages are exchanged in both directions; and last, the communication session is torn down again. However, Fscs does not provide any kind of security. Fsmt provides confidential communication, but again only supports a single (one-shot) message.

Our functionality Fmsg can be depicted as a merge of these functionalities and is defined in Figs. 3.3 and 3.4. A party that becomes the *initiator* can establish a communication session that is identified by a *sub-session identifier* (SSID) *ssid* throughout its lifetime. A party that becomes the *responder* of the communication session can then accept the communication. Please note that the term "responder" does not state anything about the direction of communication. The terms "initiator" and "responder" are only used to distinguish who started the session. Initiators can determine whether they want to stay anonymous or be identified by appropriately setting the mode *mode* ∈ {anon, ident} when they establish a session. A session is always identifying for the responder. After a session has been established, polynomially many messages can be sent in both directions. If both parties are honest, the adversary only learns the length |*m*| and direction *dir* ∈ {request, response} of each message. Again, please note that the terms "request" and "response" shall not stipulate any communication structure, especially requests

⁶ This particular party is the operator in P5C, see later.

### **Functionality** Fmsg

#### *I. State*

A (partial) mapping comm‐state assigning a triple of initiator PID *pid*initiator, responder PID *pid*responder and communication state to a sub-session identifier (SSID) *ssid*:

comm‐state ∶ SSID → PID × PID × {pending, active, closed}

#### *II. Behavior*

	- (a) Draw a fresh sub-session identifier (SSID) *ssid* that has not been used previously.
	- (b) Set comm‐state(*ssid*) ≔ (*pid*initiator, *pid*responder, pending).
	- (c) If *mode* = anon, redefine *pid*initiator ≔ ⊥.
	- (d) Leak (establishing-session,*ssid*, *pid*initiator, *pid*responder) to the adversary.
	- (e) Output (establishing-session,*ssid*, *pid*initiator,*what*) to party *pid*responder.
	- (a) Redefine comm‐state(*ssid*) ≔ (*pid*initiator, *pid*responder, active).
	- (b) Leak/output (accepted,*ssid*) to the adversary and party *pid*initiator.

comm‐state(*ssid*) = (*pid*initiator, *pid*responder, active) and

*pid*snd ∈ {*pid*initiator, *pid*responder} hold, proceed as follows …


(i) Leak (sending,*ssid*, *dir*, ||) to the adversary.

Else (*pid*snd or *pid*rcv is corrupted):


#### **Functionality** Fmsg **(cont.)**

(4) Upon obtaining input (close,*ssid*) from a party with PID *pid*, and there exists *pid*initiator, *pid*responder such that comm‐state(*ssid*) = (*pid*initiator, *pid*responder, active) and *pid* ∈ {*pid*initiator, *pid*responder} hold, proceed as follows …


Figure 3.4: The Functionality Fmsg (cont. from Fig. 3.3)

and responses do not need to come in pairs. They are only used to leak the direction of the message to the adversary without breaking the initiator's anonymity. If one of the parties is corrupted, the adversary learns the whole message and may alter it. Finally, any of the involved parties may close the communication session.

Fmsg provides the following high-level security properties (if both communication partners are honest): messaging is either bilaterally *authenticated* or one-sided authenticated and one-sided anonymous. Even if the initiator is anonymous, *integrity* of messages is always ensured. Also, Fmsg ensures that within a single session the initiator is always the same party, despite being anonymous. Lastly, messaging with Fmsg is *confidential*.
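The following Python sketch (hypothetical interface, honest parties only) captures the bookkeeping behind these guarantees: sessions are tracked per SSID, the initiator's PID is suppressed in anonymous mode, and for every message only its direction and length are leaked.

```python
import secrets

class Fmsg:
    """Simplified model of the messaging functionality for honest parties."""
    def __init__(self):
        self.comm_state = {}   # ssid -> (pid_initiator, pid_responder, state)
        self.leaked = []       # what the adversary would learn

    def establish(self, pid_initiator, pid_responder, mode):
        ssid = secrets.token_hex(8)                          # fresh sub-session identifier
        self.comm_state[ssid] = (pid_initiator, pid_responder, "pending")
        visible = None if mode == "anon" else pid_initiator
        self.leaked.append(("establishing-session", ssid, visible, pid_responder))
        return ssid

    def accept(self, ssid):
        pid_i, pid_r, _ = self.comm_state[ssid]
        self.comm_state[ssid] = (pid_i, pid_r, "active")
        self.leaked.append(("accepted", ssid))

    def send(self, ssid, pid_snd, message):
        pid_i, pid_r, state = self.comm_state[ssid]
        assert state == "active" and pid_snd in (pid_i, pid_r)
        direction = "request" if pid_snd == pid_i else "response"
        self.leaked.append(("sending", ssid, direction, len(message)))  # direction + length only
        return (pid_r if pid_snd == pid_i else pid_i), message          # delivered to the peer


f = Fmsg()
ssid = f.establish("user-42", "operator", mode="anon")
f.accept(ssid)
f.send(ssid, "user-42", b"deposit request")
print(f.leaked)   # the initiator's PID never shows up in the leakage
```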

We conclude this section with some final remarks on the realizability of Fmsg by a real protocol. Of course, Fmsg is trivially unrealizable in the first place due to its anonymity feature (see above). However, this is more of a definitional problem of the UC model. Assume for a moment that we only consider authenticated communication and only use Fmsg with *mode* = ident. Canetti [Can05, Claim 20] shows that Fauth is unrealizable in the plain model. Obviously, Fauth can be realized by Fmsg plus a wrapper protocol that restricts Fmsg to a single sub-session and a single message. In conclusion, Fmsg is unrealizable in the plain model even disregarding anonymity.

## **3.5 Setup Assumptions and Writing Conventions**

Security under universal composition is a very strong notion and thus faces impossibility results in the plain model. In particular, Canetti and Fischlin [CF01] show that UC-secure commitments are impossible without setup assumptions. Setup assumptions are modeled as ideal functionalities that are facilitated by the real protocol in order to bootstrap security. In other words, in the real security experiment the protocol is still a hybrid with some components being

#### **Functionality** FCRS

Public parameters: Security parameter *n*, PPT-algorithm Setup

*I. Receive CRS*

Party P input:(retrieve)

(1) If no *crs* has been internally recorded, run *crs* ← Setup(1<sup>n</sup>) and store *crs* internally.

(2) Load the internally recorded *crs*.

Party P output:(*crs*)

Figure 3.5: The CRS Functionality FCRS

left idealized. These setup assumptions enable a security proof, because the ideal functionality implementing the setup assumption is a subordinate of the real protocol and thus is not directly accessible by the environment. This provides the simulator in the ideal experiment with a lever to lie about the setup assumption, e.g. to embed a trapdoor, and thereby avoids the impossibility results.

#### **3.5.1 The Common-Reference String Model**

A typical and widely used setup assumption is the CRS (common-reference string) model. A CRS is a short piece of information, i.e. a bit-sequence, that is shared among all parties and has been trustworthily generated. The ideal CRS-functionality FCRS is depicted in Fig. 3.5.

We would like to elaborate on the usefulness of the CRS model. Depending on how the CRS is utilized, the model is more or less realistic. For example, following a modular approach where first a real protocol is defined using commitments as idealized UC functionalities Fcom and then these commitments are replaced by real commitment protocols in the CRS model using the composition theorem, a fresh instance of FCRS is required for each commitment. This stems from the requirement of the composition theorem that protocols must be subroutine-respecting and that FCRS is local to each commitment. Hence, this approach is highly wasteful with respect to the CRS and it is questionable where a sufficiently long CRS should come from. We stress that it is impossible for the parties to generate the CRS themselves by plain cryptographic means without violating the impossibility result and thus sacrificing security. As the CRS must come from outside the model, the CRS should be succinct and used efficiently by the protocol. This applies to our scheme. A single instance of P5C supports polynomially many parties in polynomially many interactions using the same small CRS. In other words, in our particular case

```
Functionality Fbb

I. Register

Party P input: (register, (key_1, key_2, …))

  (1) Send (registering, pid_P, (key_1, key_2, …)) to the adversary.
  (2) Upon receiving OK from the adversary proceed, else abort.
  (3) If another entry (pid_P, (…)) has already been registered for pid_P, abort.
  (4) If ∃ pid_P′ and i, j such that (pid_P′, (…, key′_j, …)) has been recorded
      and key′_j = key_i holds, abort.
  (5) Internally record the pair (pid_P, (key_1, key_2, …)).

Party P output: (OK)

II. Retrieve

Party P input: (retrieve, pid)

  (1) Look up (pid, (key_1, key_2, …)) internally; set (key_1, key_2, …) = ⊥ if no
      record exists.

Party P output: (pid, (key_1, key_2, …))

III. Reverse Retrieve (Partial, reverse lookup)

Party P input: (reverse_retrieve, i, key)

  (1) Look up (pid, (key_1, key_2, …, key_i, …)) with key_i = key internally; set
      (key_1, key_2, …) = ⊥ if no record exists.

Party P output: (pid, (key_1, key_2, …))
```
Figure 3.6: The Bulletin Board Functionality Fbb

it is plausible to assume that the CRS is generated by some trustworthy state authority or by some standardization committee beforehand.

### **3.5.2 The Bulletin Board or Key Registration Service**

Moreover, our scheme makes use of a bulletin board Fbb [CSV16, Fig. 3], which is sometimes also referred to as a key registration service [Can07; Bar+04]. A bulletin board can be depicted as a database which associates party identifiers (PIDs) with (cryptographic) public keys. The assumptions about Fbb are that upon registration the operator of the bulletin board checks the identity of the registering party in a trustworthy way and that every party can retrieve information from Fbb trustworthily. Fbb is depicted in Fig. 3.6. We slightly enhanced Fbb over the usual definitions. This modification is not significant and does not have any impact on how Fbb can be realized in principle; it is only required for syntactical purposes. Fbb allows not only a single opaque bit string to be registered as the only key per party, but a vector of bit strings. This is necessary as the reverse lookup allows searching for a particular component and not only for a complete string. Intuitively, this captures the fact that in complex systems the key is actually a composed key for several building blocks, i.e. one key for a particular instantiation of an encryption scheme, another key for a particular signature scheme, and so on. The reverse lookup allows identifying a party given only one component of the key. To this end pairwise inequality of keys must not only hold for complete keys, but for every component of a key. Having realistic building blocks in mind this is not a severe restriction.
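A minimal sketch (hypothetical Python interface) of this bookkeeping, including the per-component uniqueness check that makes the reverse lookup unambiguous:

```python
class BulletinBoard:
    """Toy model of the bulletin board: a PID maps to a vector of key components."""
    def __init__(self):
        self.records = {}                       # pid -> tuple of key components

    def register(self, pid, keys):
        if pid in self.records:
            raise ValueError("PID already registered")
        in_use = {k for ks in self.records.values() for k in ks}
        if any(k in in_use for k in keys):      # every component must be globally unique
            raise ValueError("key component already in use")
        self.records[pid] = tuple(keys)

    def retrieve(self, pid):
        return self.records.get(pid)            # None if no record exists

    def reverse_retrieve(self, index, key):
        for pid, keys in self.records.items():
            if index < len(keys) and keys[index] == key:
                return pid, keys
        return None


bb = BulletinBoard()
bb.register("operator", ("enc-key-op", "sig-key-op"))
bb.register("pos-7", ("enc-key-7", "sig-key-7"))
print(bb.reverse_retrieve(1, "sig-key-7"))   # identifies pos-7 from one component
```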

Fbb is unrealizable in the plain model without authenticated channels, i.e. without another setup assumption [Can03, Sec. 3.2]. However, the other way around, Fbb can also be used to realize authenticated channels Fauth or our messaging functionality Fmsg. In this case, a real-world implementation of Fbb needs to be trusted to correctly verify the PID of a party outside the model. In our scenario the PIDs of users could be a passport number, an SSN, a customer ID or any other reasonable, verifiable and unique attribute. For PoSes the geo-location could be used as a PID.

Again, we would like to briefly sketch how Fbb could be implemented in our scenario. Looking ahead, P5C puts us in the lucky situation that the scheme only exhibits a restricted communication pattern. Users only communicate with PoSes and the operator but never with each other. Vice versa, the inverse holds for the PoSes, with the additional benefit that users remain anonymous. Moreover, there is only a single operator of the system that stays the same all the time and the set of PoSes is rather static. Only the set of users is relatively dynamic. Hence, Fbb could be implemented as a simple list that is locally (and partially) stored at each party and infrequently updated. This frees us from the problem that the bulletin board needs to be remotely accessible over an online connection, which itself would require some sort of authentication again. Each PoS only needs to know the key of the operator. This could be set upon deployment of the PoS and updated during maintenance, if necessary. Users need the key of the operator and a list of keys of valid PoSes. Again, this list could be installed/updated at the user's side when the user wallet is issued. Conversely, the operator needs a list of keys of all users and PoSes. But this is not a problem at all, because the operator owns/maintains the PoSes and users must register with the operator, which we assume to happen in person. At the bottom line, the security assumption boils down to the ability of the parties to mutually verify their (physical) identities and to exchange their public keys over a local connection (e.g. an NFC reader) when meeting face-to-face without a man-in-the-middle. In summary, we deem this a very mild trust assumption.

#### **3.5.3 Some Writing Conventions**

Lastly, we assume that our functionalities also use the implicit writing conventions for ideal functionalities [Can01]. In the real experiment parties need to communicate over the network. Hence, a party that expects to receive a message usually neither proceeds nor outputs anything until the adversary delivers the message. In contrast, ideal functionalities use local input/output and thus normally react immediately per definition. In order to enable indistinguishability, the ideal functionality must provide the simulator with a lever to delay output. Formally, an ideal functionality asks the simulator for permission to pass output to a party. To this end the ideal functionality sends a suitable request to the simulator. This request does not contain the actual output (which remains secret) but is equipped with a unique "output ID" that uniquely identifies this output. When the simulator replies with the same output ID, the associated output is eventually passed to the party. The output IDs also allow outputs to be re-ordered to some extent. For example, if a sender broadcasts a message to several recipients in the real experiment and the ideal functionality passes output to the same set of recipients, the simulator must be able to re-order the sequence of outputs in correspondence to the order of delivered messages in the real experiment.

As this entails a lot of boilerplate code that does not provide any insight, we simply assume this mechanism to be implicit to all ideal functionalities and that they "just do the right thing". In particular, our simulator can delay outputs and abort the current tasks of the ideal functionality at any point.
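For concreteness, a toy sketch (hypothetical names) of the delayed-output bookkeeping that we keep implicit: every pending output gets a unique output ID, and it is only released, possibly out of order, when the simulator acknowledges that ID.

```python
import itertools

class DelayedOutputs:
    """Outputs are buffered under a unique output ID and only delivered once the
    simulator replies with that ID; the actual output is never shown to it."""
    def __init__(self):
        self._ids = itertools.count()
        self._pending = {}                       # output ID -> (recipient, output)

    def request(self, recipient, output):
        oid = next(self._ids)
        self._pending[oid] = (recipient, output)
        return oid                               # only the ID is sent to the simulator

    def release(self, oid):
        return self._pending.pop(oid)            # simulator permits this delivery


buf = DelayedOutputs()
a = buf.request("alice", ("issued_wallet", "s1"))
b = buf.request("bob", ("issued_wallet", "s2"))
print(buf.release(b))                            # the simulator may re-order deliveries
print(buf.release(a))
```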

## **4 System Definition**

In this chapter we formally define our ideal functionality Fapc. Usually, ideal functionalities are rather simple objects and immediately evince that they capture the "right" definition of security. At least this is true for ideal functionalities that define cryptographic primitives like commitments, encryption or oblivious transfer. But here, Fapc defines security and privacy for a complex, real-world system and is almost a protocol on its own. Therefore, we also try to motivate why the definition is the way it is and why seemingly "insecure"¹ choices are the best we can hope for. An explicit mapping of the properties identified in Section 2.6 onto the ideal model is given in Chapter 5. A summary of the used variables is listed in the appendix as a quick reference.

We do not formalize each task (e.g., IssueWallet, Deposit, …) as an individual ideal functionality, but the whole system as a monolithic, highly reactive functionality Fapc with polynomially many parties as users and PoSes. A monolithic functionality allows for a shared state between the individual interactions and makes it easier to define correctness, security and privacy. We will therefore first explain this state in Section 4.1 before we go on to describe the behavior of Fapc. The ideal functionality Fapc provides twelve different tasks in total, which we divide into three categories: "Setup Tasks" (comprising all party registrations and CertifyPOS) are defined in Section 4.2. "Main Tasks" (IssueWallet, Deposit and Disburse) are defined in Section 4.3. Finally, "Utility Tasks" (ProveParticipation, DetectDS, VerifyGuilt and BlacklistWallet) are covered in Section 4.4.

## **4.1 The Internal State**

The key idea of Fapc is to internally keep track of all conducted transactions in a pervasive transaction database *TRDB* (see Fig. 4.1). Note that in this case "transaction" refers to the tasks IssueWallet, Deposit or Disburse, not just Deposit. Each transaction entry *trdb* ∈ *TRDB* is of the form

$$trdb = (s^{\mathrm{prev}}, s, \varphi, x, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{P}}, p, b, \omega_{\mathrm{ds}}, \omega_{\mathrm{rc}}, \omega_{\mathrm{pp}}). \tag{4.1}$$

¹ N.b.: The ideal functionality cannot be insecure per definitionem. However, it could capture a concept of security that does not coincide with the intuitive perception of security.

### **Functionality** Fapc

*I. State*

• Set *TRDB* = {*trdb*} of transactions

$$\begin{split} trdb &= (s^{\mathrm{prev}}, s, \varphi, x, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{P}}, p, b, \omega_{\mathrm{ds}}, \omega_{\mathrm{rc}}, \omega_{\mathrm{pp}}) \\ &\in \mathcal{S} \times \mathcal{S} \times \Phi \times \mathbb{N}_{0} \times \mathcal{PID}_{\mathcal{U}} \times \mathcal{PID}_{\mathcal{P}} \times \mathbb{Z}_{p} \times \mathbb{Z}_{p} \\ &\qquad \times \{0, 1\}^{*} \times \{0, 1\}^{*} \times \{0, 1\}^{*}. \end{split}$$

• A (partial, injective) mapping f<sub>Φ</sub> assigning a fraud-detection ID φ to a pair of wallet ID λ and counter x:

$$f_{\Phi}: \mathcal{L} \times \mathbb{N}_0 \to \Phi, \; (\lambda, x) \mapsto \varphi$$

• A (partial) mapping f<sub>A<sub>U</sub></sub> assigning user attributes a<sub>U</sub> to a wallet ID λ:

$$f_{\mathcal{A}_{\mathcal{U}}} : \mathcal{L} \to \mathcal{A}_{\mathcal{U}}, \; \lambda \mapsto a_{\mathcal{U}}$$

• A (partial) mapping f<sub>A<sub>P</sub></sub> assigning PoS attributes a<sub>P</sub> to a PoS PID *pid*<sub>P</sub>:

$$f_{\mathcal{A}_{\mathcal{P}}} : \mathcal{PID}_{\mathcal{P}} \to \mathcal{A}_{\mathcal{P}}, \; pid_{\mathcal{P}} \mapsto a_{\mathcal{P}}$$

• A (partial) mapping f<sub>π</sub> assigning a validity bit to a pair of user PID *pid*<sub>U</sub> and proof of guilt π:

$$f_{\pi}: \mathcal{PID}_{\mathcal{U}} \times \Pi \to \{\mathsf{OK}, \mathsf{NOK}\}$$

• An injective mapping f<sub>Ω<sub>bl</sub></sub> assigning a blacklisting tag ω<sub>bl</sub> to a wallet ID λ:

$$f_{\Omega_{\mathrm{bl}}}: \mathcal{L} \to \Omega_{\mathrm{bl}}, \; \lambda \mapsto \omega_{\mathrm{bl}}$$

#### *II. Behavior*


Figure 4.1: The Functionality Fapc – Internal State and Overview of Tasks

Figure 4.2: An entry *trdb* ∈ *TRDB* visualized as an element of a directed graph

It contains the identities *pid*<sub>U</sub> and *pid*<sub>P</sub> of the involved user and PoS (or operator in the case of IssueWallet and Disburse), the wallet ID λ of the wallet that was used, as well as the price *p* associated with this particular transaction and the total balance *b* of the user's wallet after this transaction, i.e., the accumulated sum of all prices so far including the current transaction. In other words, Fapc implements a trustworthy global bookkeeping service that manages the wallets of all users. Each transaction entry is uniquely identified by a serial number *s* and links via *s*<sup>prev</sup> to the previous transaction *trdb*<sup>prev</sup> which corresponds to the wallet state prior to *trdb*. Additionally, each entry contains a counter *x* indicating the number of transactions of a wallet since its generation, i.e. *x* = *x*<sup>prev</sup> + 1 always holds, and a fraud-detection ID φ which is required for double-spending detection.

The database *TRDB* can best be visualized as a directed graph with each *trdb* entry representing a node together with an edge pointing to its predecessor (see Fig. 4.2 for a depiction). Each node represents the state of a wallet *after* the respective transaction, i.e., at the end of an execution of IssueWallet, Deposit or Disburse, and the edges correspond to the transition from the previous to the next state. Nodes are identified by serial numbers and additionally labeled with (φ, *x*, λ, *pid*<sub>U</sub>, *b*). The edge to the predecessor is identified by (*s*<sup>prev</sup>, *s*) and additionally labeled with (*pid*<sub>P</sub>, *p*, ω<sub>ds</sub>, ω<sub>rc</sub>, ω<sub>pp</sub>).

Also, each transaction, or more precisely each transition from one wallet state to the next, is associated with various tags: the double-spending tag ω<sub>ds</sub>, the recalculation tag ω<sub>rc</sub> and the prove-participation tag ω<sub>pp</sub>. A fourth tag, the blacklisting tag ω<sub>bl</sub>, is not depicted here. The latter is only generated when a wallet is issued and thus belongs to the "imaginary transition" from the void to the root node. Therefore, ω<sub>bl</sub> is not recorded in *trdb* but separately kept by Fapc in the map f<sub>Ω<sub>bl</sub></sub> (cp. Fig. 4.1). In short, these tags serve as a kind of receipt or "evidence" for certain aspects of a transaction and store implementation-specific information. Their description is postponed to Section 4.1.2. But first some explanations are in order with respect to the different IDs that are attached to each transaction, namely the serial number *s*, the wallet ID λ and the fraud-detection ID φ.

#### **4.1.1 Transaction Identifiers**

In a truly ideal world, Fapc would use the user's identity *pid*<sub>U</sub> and a wallet ID λ to look up its most recent entry in the database and append a new entry. Such a scheme, however, could only be implemented by an online system. Since we require offline capabilities—allowing a user and a PoS to interact without the help of other parties and without permanent access to a central authority—the inherent restrictions of such a setting must be reflected in the ideal model:


In order to accurately define security, these technicalities have to be incorporated into Fapc, which causes the bookkeeping to be more involved.

To ease the upcoming definition of Fapc we bring forward some properties of the transaction database *TRDB*. In Section 5.1 we show that *TRDB* is a directed forest with labels as described above and prove some invariants. But there we reverse the train of thought, take the definition of Fapc as a starting point and then prove that Fapc actually yields a graph with the desired properties. Here, we start from our goal and describe our intention behind the transaction database to enable an intuitive understanding.

A user's wallet is represented by the subgraph of all nodes with the same wallet ID λ and forms a tree inside the forest (see Fig. 4.3). If a new wallet is issued, IssueWallet creates a new entry of the form

$$(\bot, s, \varphi, 0, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{O}}, 0, 0, \omega_{\mathrm{ds}}, \omega_{\mathrm{rc}}, \omega_{\mathrm{pp}}). \tag{4.2}$$

These transactions have no predecessor and are root nodes of new wallets. Therefore, *s*<sup>prev</sup> = ⊥ and also *x* = 0 holds. Deposit and Disburse extend a tree. The task Disburse clears a wallet's balance and the corresponding entries are leaf nodes of their tree with the restricted form

$$\left(s^{\mathrm{prev}}, s, \varphi, x, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{O}}, -b^{\mathrm{prev}}, 0, \omega_{\mathrm{ds}}, \omega_{\mathrm{rc}}, \omega_{\mathrm{pp}}\right) \tag{4.3}$$

Every task other than IssueWallet, Deposit and Disburse does not alter *TRDB* but only queries it.

Unless a user commits double-spending with a wallet, the particular subgraph is a linked, linear list. If a user misbehaves and reuses an old wallet state (i.e., there are edges (*s*<sup>prev</sup>, *s*) and (*s*<sup>prev</sup>, *s*′)), the corresponding subgraph becomes a directed tree. The counter *x* equals the depth of a node in a tree and the fraud-detection ID φ is given by an injective, random function f<sub>Φ</sub> ∶ L × ℕ<sub>0</sub> → Φ, (λ, *x*) ↦ φ of the pair (λ, *x*) of wallet ID and counter. If and only if a node is not part of a double-spending, the pair (λ, *x*) is globally unique and therefore φ is, too. Otherwise, all transaction entries that constitute a double-spending, i.e., all nodes with the same predecessor, share the same counter value *x* and the same fraud-detection ID φ.
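The following heavily simplified sketch (hypothetical names) mirrors this bookkeeping: each new entry points to its predecessor and increments the counter, the fraud-detection ID is an injective random function of (λ, x), and a double-spend shows up as two entries sharing the same fraud-detection ID.

```python
import secrets

class TransactionDB:
    """Toy version of TRDB as a directed forest of wallet states."""
    def __init__(self):
        self.entries = {}    # serial number -> entry
        self._phi = {}       # (wallet id, counter) -> fraud-detection ID

    def _fraud_id(self, wallet_id, counter):
        # injective, random function of (λ, x); reused only if a state is reused
        if (wallet_id, counter) not in self._phi:
            self._phi[(wallet_id, counter)] = secrets.token_hex(4)
        return self._phi[(wallet_id, counter)]

    def issue(self, pid_user, wallet_id):
        s = secrets.token_hex(4)
        self.entries[s] = dict(prev=None, x=0, wallet=wallet_id, user=pid_user,
                               phi=self._fraud_id(wallet_id, 0), balance=0)
        return s

    def deposit(self, s_prev, pid_pos, price):
        prev = self.entries[s_prev]
        s, x = secrets.token_hex(4), prev["x"] + 1
        self.entries[s] = dict(prev=s_prev, x=x, wallet=prev["wallet"], user=prev["user"],
                               phi=self._fraud_id(prev["wallet"], x),
                               balance=prev["balance"] + price, pos=pid_pos)
        return s

    def double_spent_ids(self):
        """Fraud-detection IDs occurring in more than one entry reveal a reused state."""
        seen, hits = set(), set()
        for entry in self.entries.values():
            if entry["phi"] in seen:
                hits.add(entry["phi"])
            seen.add(entry["phi"])
        return hits


db = TransactionDB()
root = db.issue("user-1", wallet_id="w1")
s1 = db.deposit(root, "pos-3", price=5)
s2 = db.deposit(root, "pos-4", price=7)     # reuses the old state: a double-spend
print(db.double_spent_ids())                # exactly one shared fraud-detection ID
```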

Although the database *TRDB* and the mapping f<sub>Φ</sub> contain most of the required information, Fapc stores four more partially defined mappings. The mappings f<sub>A<sub>U</sub></sub> ∶ L → A<sub>U</sub> and f<sub>A<sub>P</sub></sub> ∶ PID<sub>P</sub> → A<sub>P</sub>

Wallets that belong to the same user are grouped by a dashed line. The serial number *s* is *globally unique* for a node. Nodes that belong to the same tree (aka wallet) share the same wallet ID λ (here: 1, 3 or 6). The counter *x* equals the depth of a node. The fraud-detection ID φ is an injective, random function of the pair (λ, *x*). Unless double-spending occurred, (λ, *x*) is also *globally unique* and thus φ, too. Nodes that belong to the same double-spending share the same fraud-detection ID φ and are grouped by a dotted line.

Figure 4.3: The transaction database *TRDB* visualized as a directed forest

keep track of the parties' attributes by internally storing PoS attributes a<sub>P</sub> upon certification and user attributes a<sub>U</sub> when a wallet is issued. The mapping f<sub>π</sub> keeps track of proofs of guilt that are issued or queried in the context of double-spending detection. The already mentioned mapping f<sub>Ω<sub>bl</sub></sub> keeps track of blacklisting tags that are generated when a new wallet is issued.

#### **4.1.2 Tags and the Synchronization of State**

As already briefly touched upon in the introduction of this section, each transaction is associated with a collection of so-called "tags" that are stored in the transaction database alongside the actual information about the transaction. Their common characteristic is that they serve as a sort of digital receipt, and each type of tag goes with one of the utility tasks:


What has been said about the different identifiers in Section 4.1.1 also applies to the tags. In a truly ideal world, none of these tags would be required and Fapc could be defined without them. For example, in order to prove participation in a particular transaction, the user and the violation enforcer could simply input the whereabouts of the transaction into Fapc and Fapc merely uses its global transaction database to undoubtedly acknowledge (or deny) whether such a transaction has been recorded. Similarly, in order to recalculate the balance of a particular wallet, the operator could input the wallet ID into Fapc and Fapc easily sums over the prices of all recorded transactions with that wallet ID. Unfortunately, this would be an overly idealized model and could not be realized, at least not by a system in which information is stored in a decentralized way with offline capabilities. In such a system inconsistencies naturally arise. For example, when a user and a PoS have completed a transaction but the PoS has not yet sent the accounting information to the operator, the operator is incapable of considering this transaction when a balance needs to be recalculated. These technicalities must be accurately modeled by the ideal functionality to let the security proof go through. An alternative approach instead of using tags is discussed in Section 5.4.1. That section also points out some of the errors in [Nag+20].

We stress that the ideal model stipulates neither how these tags look nor what they encode. These details are left to the eventual realization. From the perspective of Fapc these tags are treated as opaque bit strings that are "placeholders" to be filled out by the simulator in the security proof. The ideal functionality uses the tags only insofar as to "flag" which transaction is known by which party. For Fapc the global transaction database *TRDB* is the only authoritative source of information.

Typically, the tags are only passed through as output to a party and later re-input. This raises the problem that the environment can also input tags which have never been output by any task of the ideal functionality before but are made up by the environment. We coin the following terms.

**Definition 4.1 (Genuine vs. Fake Tags)** *If a tag is input to a task of* Fapc *by any party and the tag has been output before, we call it a* genuine *tag. A tag that is not genuine is called a* fake *tag.*

Skipping ahead to the definition of all tasks of Fapc, a tag is genuine if and only if it is recorded in *TRDB* (in the case of ω<sub>ds</sub>, ω<sub>pp</sub>, ω<sub>rc</sub>) or if f<sub>Ω<sub>bl</sub></sub><sup>−1</sup>(ω<sub>bl</sub>) is defined (in the case of ω<sub>bl</sub>). In other words, all tasks of Fapc are defined such that they never output a tag without recording it in the internal state.
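In code, the distinction amounts to a simple membership test against the internal state of Fapc (hypothetical, simplified field names):

```python
def is_genuine(tag, trdb_entries, blacklist_map, kind):
    """A tag is genuine iff F_apc has previously recorded (and hence output) it."""
    if kind == "bl":
        return tag in blacklist_map.values()      # the preimage under the blacklist map is defined
    return any(entry.get(kind) == tag for entry in trdb_entries)


entries = [{"ds": "tag-ds-1", "rc": "tag-rc-1", "pp": "tag-pp-1"}]
print(is_genuine("tag-rc-1", entries, {}, kind="rc"))     # True: recorded in TRDB
print(is_genuine("made-up-tag", entries, {}, kind="rc"))  # False: a fake tag
```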

## **4.2 Setup Tasks**

To set up the system two things are required: All parties—the dispute resolver, the operator, the PoSes and the users—have to register to be able to participate in the toll collection system. As all of these registration tasks are similar, we will not describe them separately. In addition, PoSes need to be certified by the operator.

### **4.2.1 Registrations**

The tasks RegisterDR, RegisterOp, RegisterPOS and RegisterUser (cp. Figs. 4.4 to 4.7) are straightforward and analogous. Upon invocation by the respective party through the input register, these tasks notify the adversary about the registration. This models that the information whether a particular party participates in the system is public. In the case of the operator, the respective task additionally receives an attribute vector a<sub>O</sub> which defines the PoS attributes

### **Functionality** Fapc **(cont.) – Task** RegisterDR

Dispute resolver input:(register)

(1) Leak (registering\_dr, *pid*<sub>DR</sub>) to the adversary.

Dispute resolver output:(registered)

Figure 4.4: The Functionality Fapc (cont. from Fig. 4.1) – Task RegisterDR

**Functionality** Fapc **(cont.) – Task** RegisterOp

Operator input: (register, a<sub>O</sub>)


Operator output:(registered)

Figure 4.5: The Functionality Fapc (cont. from Fig. 4.1) – Task RegisterOp

### **Functionality** Fapc **(cont.) – Task** RegisterPOS

PoS input: (register)

(1) Leak (registering\_pos, *pid*<sub>P</sub>) to the adversary.

PoS output: (registered)

Figure 4.6: The Functionality Fapc (cont. from Fig. 4.1) – Task RegisterPOS

### **Functionality** Fapc **(cont.) – Task** RegisterUser

User input:(register)

(1) Leak (registering\_user, *pid*<sup>U</sup> ) to the adversary.

User output:(registered)

Figure 4.7: The Functionality Fapc (cont. from Fig. 4.1) – Task RegisterUser

**Functionality** Fapc **(cont.) – Task** CertifyPOS

PoS input: (certify\_pos)

(1) If RegisterOp or RegisterPOS have not yet been run, output ⊥ and abort.

Operator output: (certifying\_pos, *pid*<sub>P</sub>)
Operator input: (certifying\_pos, a<sub>P</sub>)

(2) Leak (certifying\_pos, *pid*<sub>P</sub>, a<sub>P</sub>) to the adversary.
(3) Set f<sub>A<sub>P</sub></sub>(*pid*<sub>P</sub>) ≔ a<sub>P</sub>.

PoS output: (certified\_pos, a<sub>P</sub>)
Operator output: (certified\_pos)

Figure 4.8: The Functionality Fapc (cont. from Fig. 4.1) – Task CertifyPOS

that are used by the operator when acting like a PoS, for example in the tasks IssueWallet and Disburse. For a discussion of the attributes see Section 2.4.

#### **4.2.2 Point-of-Sale Certification**

CertifyPOS (cp. Fig. 4.8) is a two-party task between the operator and a PoS in which the PoS is assigned an attribute vector a<sub>P</sub>. Again, we refer to Section 2.4 for a discussion of these attributes. The attribute vector a<sub>P</sub> is chosen by the operator after it has learned the PoS's identity, while the PoS only inputs its desire to be certified. Fapc (re-)defines the partial mapping f<sub>A<sub>P</sub></sub>(*pid*<sub>P</sub>) ≔ a<sub>P</sub> which internally stores all PoS attributes. The identity *pid*<sub>P</sub> and attributes a<sub>P</sub> are leaked to the adversary before the attributes are output to the PoS.

Please note that the proposed definition of CertifyPOS is extremely simple and enables effects that are probably undesirable in a real-world application. In order to model re-certification of a PoS, Fapc allows f<sub>A<sub>P</sub></sub> to be overwritten and thereby annihilates a previous version of the attributes. Skipping ahead, let us assume that the number of points a user gains in Deposit depends on f<sub>A<sub>P</sub></sub>. Further assume that the balance of a user's wallet is recalculated by the operator at some later point in time and that the PoS's attributes have been changed in the meantime. In this case the recalculated balance and the balance that is stored in the wallet will not match. This is even true if all parties have been honest.

To remedy this problem in the ideal model, we would need to introduce a sequence of "versions" of f<sub>A<sub>P</sub></sub> indexed by a version counter and log the temporal order of all transactions, i.e. equip each transaction with the version counter that was in effect at the time of the transaction. This would complicate the ideal functionality even more. Also note that a secure realization would be quite involved, too. It would not suffice if the operator locally stored a

history of all past certifications, because a malicious operator would have the power to lie about it. If a consistent recalculation of the balance under intermittently changing attributes were one of the security properties, a secure realization would at least require that the attributes are irreversibly logged in a publicly verifiable way before they become effective. To keep matters simple, we ignore this problem.

## **4.3 Main Tasks**

Now we describe the main tasks one would expect from any anonymous point collection system: IssueWallet, Deposit and Disburse. As mentioned before, these are the only tasks in which transaction entries are created.

#### **4.3.1 Wallet Issuing**

IssueWallet (cp. Fig. 4.9) is a two-party task between a user and the operator in which a new wallet is created for the user. After the operator has learned the user's identity, the operator inputs an attribute vector a<sub>U</sub>. The operator is free to abort at this point if users shall not obtain a new wallet, e.g. because they have been identified as fraudsters in a previous run of DetectDS. First, Fapc randomly picks a (previously unused) serial number *s* for the new transaction entry *trdb*. A new wallet ID λ and fraud-detection ID φ are uniquely and randomly picked, unless the user is corrupted, in which case the adversary chooses φ. This may infringe upon the unlinkability of the user's transactions, and we do not give any privacy guarantees for corrupted users. Finally, a transaction entry

$$trdb := (\bot, s, \varphi, 0, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{O}}, 0, 0, \bot, \bot, \bot) \tag{4.4}$$

corresponding to the new wallet is stored in *TRDB* and the wallet's attributes f<sub>A<sub>U</sub></sub>(λ) ≔ a<sub>U</sub> are appended to the partial mapping f<sub>A<sub>U</sub></sub>. Fapc asks the adversary to provide a blacklisting tag ω<sub>bl</sub> which is internally recorded as being associated with the wallet through the partial mapping f<sub>Ω<sub>bl</sub></sub>. The blacklisting tag is re-used in the utility task BlacklistWallet. Both parties get the serial number *s* as output. The user also receives the attribute vector a<sub>U</sub> to check out-of-band that it has been assigned correctly and, more importantly, does not contain any identifying information. The operator receives the blacklisting tag ω<sub>bl</sub>.

We stress that Fapc does not really use the blacklisting tag ω<sub>bl</sub>, but only passes it through. For a discussion of the tags see Section 4.1.2.

**Functionality** Fapc **(cont.) – Task** IssueWallet

User input: (issue\_wallet)

(1) If RegisterOp, RegisterUser or RegisterDR have not yet been run, output ⊥ and abort.

Operator output: (issuing\_wallet, *pid*<sub>U</sub>)
Operator input: (issuing\_wallet, a<sub>U</sub>)

(2) Pick a serial number *s* ←<sub>R</sub> S and a wallet ID λ ←<sub>R</sub> L that has not previously been used.
(3) If *pid*<sub>U</sub> ∈ PID<sub>corr</sub> or *pid*<sub>O</sub> ∈ PID<sub>corr</sub>, leak (issuing\_wallet, *s*, a<sub>U</sub>) to the adversary.*ᵃ*
(4) Pick a fraud-detection ID φ ←<sub>R</sub> Φ that has not previously been used, or—if *pid*<sub>U</sub> ∈ PID<sub>corr</sub>—leak (issuing\_wallet) to the adversary and ask for a fraud-detection ID φ that has not previously been used.*ᵇ*
(5) Append f<sub>Φ</sub>(λ, 0) ≔ φ to f<sub>Φ</sub>.
(6) Leak (issuing\_wallet) to the adversary and ask for a blacklisting tag ω<sub>bl</sub> that has not previously been used.
(7) Append *trdb* ≔ (⊥, *s*, φ, 0, λ, *pid*<sub>U</sub>, *pid*<sub>O</sub>, 0, 0, ⊥, ⊥, ⊥) to *TRDB*.
(8) Set f<sub>A<sub>U</sub></sub>(λ) ≔ a<sub>U</sub>.
(9) Set f<sub>Ω<sub>bl</sub></sub>(λ) ≔ ω<sub>bl</sub>.

User output: (issued\_wallet, *s*, a<sub>U</sub>)
Operator output: (issued\_wallet, *s*, ω<sub>bl</sub>)

Figure 4.9: The Functionality Fapc (cont. from Fig. 4.1) – Task IssueWallet

*ᵃ* N.b., this leakage does not weaken the "actual" security at all. The serial number s is output to both parties (see below), and the attribute vector a<sup>U</sup> is input by the operator and output to the user. Hence, if any of these parties is corrupted, the adversary learns this information anyway. This early leakage ahead of time is only a concession to the final implementation to enable the simulation of messages in the correct order.

*ᵇ* Picking the upcoming fraud-detection IDs randomly asserts untrackability for honest users. For corrupted users, we do not (and cannot) provide such a guarantee and the fraud-detection ID might be chosen adversarially (cp. text body).
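To make the bookkeeping of IssueWallet concrete, the following minimal Python sketch mimics the internal state of Fapc as plain dictionaries. It is purely illustrative: all names (`toy_issue_wallet`, `trdb`, `frauddet`, etc.) are hypothetical, and leakage, corruption handling and the interaction with the adversary are omitted.

```python
import secrets

# Hypothetical sketch of F_apc's bookkeeping for IssueWallet.
# trdb maps serial numbers to 12-tuples following Definition 5.1:
# (s_prev, s, phi, x, lam, pid_U, pid_P, p, b, tag_ds, tag_rc, tag_pp).
# frauddet maps (lam, x) to fraud-detection IDs.
def fresh_id():
    return secrets.token_hex(16)      # previously unused with overwhelming probability

def toy_issue_wallet(trdb, frauddet, attr_user, bl_tags,
                     pid_user, pid_op, a_user, tag_bl):
    s, lam, phi = fresh_id(), fresh_id(), fresh_id()
    frauddet[(lam, 0)] = phi
    # root entry: no predecessor, counter x = 0, price and balance 0, no tags yet
    trdb[s] = (None, s, phi, 0, lam, pid_user, pid_op, 0, 0, None, None, None)
    attr_user[lam] = a_user           # wallet attributes A_U(lam) := a_U
    bl_tags[lam] = tag_bl             # blacklisting tag recorded for the wallet
    return s                          # serial number output to both parties
```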

#### **4.3.2 Deposition**

This two-party task (cp. Figs. 4.10 and 4.11) is conducted whenever a user interacts with a PoS and serves the main purpose of depositing points on a user's wallet.

This task is by far the most complicated one, and it is not straightforward to see why it captures a sane definition of security. For ease of presentation, we first describe the behavior of Fapc in the completely honest case without misbehavior, i.e. all parties (user, PoS and operator) are honest, and the user is neither blacklisted nor commits double-spending. After that, we describe the restrictions and conditional branches of code which are required to obtain a definition that is actually realizable under corruption in our setting. Please note that although the operator does not seem to be immediately involved in the task Deposit as a participating party, the definition still depends on the corruption status of the operator. Remember that within a single instantiation of Fapc polynomially many parties can interact in polynomially many tasks and thus the operator is implicitly involved. This has been one of the oversights in [Nag+20].

To start a deposition of points, users input a serial number s<sup>prev</sup>, indicating which past wallet state they wish to use, and the identity of the PoS they want to interact with. Of course, well-behaving users always use the most recent state of a wallet. The participating PoS in turn inputs a blacklist *bl* of fraud-detection IDs.

Firstly, Fapc looks up whether a wallet state *trdb*<sup>prev</sup> in *TRDB* corresponds to the provided serial number s<sup>prev</sup> and belongs to the correct user with PID *pid*<sup>U</sup>. This guarantees that users can only deposit points on a wallet which has been legitimately issued to them. The ideal functionality uses part of the information from the previous wallet state

$$
trdb^{\text{prev}} = (\cdot, s^{\text{prev}}, \varphi^{\text{prev}}, x^{\text{prev}}, \lambda^{\text{prev}}, pid_{\mathcal{U}}, pid_{\mathcal{P}}^{\text{prev}}, \cdot, b^{\text{prev}}, \cdot, \cdot, \cdot) \tag{4.5}
$$

to determine those parts of the new transaction entry *trdb* which remain constant for transactions within the same wallet. Fapc randomly picks a fresh serial number s for the upcoming transaction, the user PID *pid*<sup>U</sup> and the wallet ID λ stay the same, *pid*<sup>P</sup> is set to the identity of the participating PoS and the transaction counter is increased by one (x ≔ x<sup>prev</sup> + 1). In the completely honest case without misbehavior, the map entry (λ, x) is always undefined (cp. Step 6 in Fig. 4.10). Fapc ties a fresh, uniformly and independently drawn fraud-detection ID ((λ, x) ↦ φ) to the x'th transaction of the wallet λ. This fraud-detection ID is checked against the blacklist *bl*. Note that the probability of blacklisting a freshly drawn fraud-detection ID is negligible. Moreover, Fapc looks up the user's attributes bound to this particular wallet (a<sup>U</sup> ≔ A<sup>U</sup>(λ)) and the attributes of the current and previous PoS (a<sup>P</sup> ≔ A<sup>P</sup>(*pid*<sup>P</sup>), a<sup>P,prev</sup> ≔ A<sup>P</sup>(*pid*<sup>P,prev</sup>)). The current serial number s and the current fraud-detection ID φ together with the attributes of the user and the previous PoS are output to the PoS, which chooses the price p of this transaction. We refer the reader to Section 2.4 for a justification why the PoS chooses the price unilaterally.


Figure 4.10: The Functionality Fapc (cont. from Fig. 4.1) – Task Deposit, Part 1

*ᵇ* This unveils the user's identity, but we do not guarantee that for double-spenders (cp. text body).

*ᶜ* Picking the upcoming fraud-detection IDs randomly asserts untrackability for honest users. For corrupted users, we do not (and cannot) provide such a guarantee and the fraud-detection ID might be chosen adversarially (cp. text body).

#### **Functionality** Fapc **(cont.) – Task** Deposit**, Part 2**

PoS input: (depositing, p)

(9) b ≔ b<sup>prev</sup> + p.

(10) If *pid*<sup>O</sup> ∉ PIDcorr, leak (depositing, s, φ, *pid*<sup>P</sup>) to the adversary,*ᵃ* else leak (depositing, s, φ, *pid*<sup>P</sup>, p) to the adversary,*ᵇ* and (in both cases) ask for tags (ds, rc, pp) that have not previously been used, or—if *pid*<sup>P</sup> ∈ PIDcorr—also accept a non-unique ds.*ᶜ*

(11) Append (s<sup>prev</sup>, s, φ, x, λ, *pid*<sup>U</sup>, *pid*<sup>P</sup>, p, b, ds, rc, pp) to *TRDB*.

User output: (deposited, s, a<sup>P</sup>, p, b, pp)

PoS output:(deposited, ds, rc, pp)

*ᵇ* N.b., the corrupted operator eventually collects all recalculation tags and thus the adversary learns the price.

*ᶜ* If the PoS is corrupted, we cannot guarantee double-spending detection.

Figure 4.11: The Functionality Fapc (cont. from Fig. 4.1) – Task Deposit, Part 2

The new balance b is calculated and the adversary is asked to provide a double-spending tag ds, a recalculation tag rc and a prove-participation tag pp. These tags may depend on the current serial number s, the current fraud-detection ID φ and the identity of the involved PoS *pid*<sup>P</sup>. We stress that both IDs (s, φ) have been freshly and uniformly drawn and thus, information-theoretically, do not reveal anything useful. Finally, the new transaction record *trdb* is stored in *TRDB*. Note that all information leading to the new wallet state except for the price and the tags (ds, rc, pp) comes from data internally stored in Fapc itself and can therefore not be compromised. The serial number s, the current PoS' attributes a<sup>P</sup>, the price p and the updated balance b are output to the user so they may check that they received the expected amount of points. As before, Fapc does not really use the tags, but only records them and outputs them again. For a discussion of the tags see Section 4.1.2.
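Under the same assumptions, the honest-case bookkeeping of Deposit can be sketched as follows. The data layout (a dictionary `trdb` of 12-tuples and a map `frauddet` from (λ, x) to fraud-detection IDs) mirrors the hypothetical IssueWallet sketch above, and `tags` stands in for the values the adversary would provide in Fapc; all names are illustrative, corruption branches and leakage are omitted.

```python
import secrets

# Hypothetical honest-case Deposit bookkeeping (same toy data layout as above).
def toy_deposit(trdb, frauddet, s_prev, pid_user, pid_pos, price, blacklist, tags):
    entry = trdb.get(s_prev)
    if entry is None or entry[5] != pid_user:      # wallet must belong to this user
        raise RuntimeError("unknown wallet state or wrong owner")
    x_prev, lam, b_prev = entry[3], entry[4], entry[8]
    x = x_prev + 1
    phi = frauddet.setdefault((lam, x), secrets.token_hex(16))
    if phi in blacklist:                           # premature abort on a blacklisted ID
        raise RuntimeError("wallet is blacklisted")
    s = secrets.token_hex(16)
    b = b_prev + price                             # updated balance
    tag_ds, tag_rc, tag_pp = tags                  # in F_apc these come from the adversary
    trdb[s] = (s_prev, s, phi, x, lam, pid_user, pid_pos, price, b,
               tag_ds, tag_rc, tag_pp)
    return s, phi, b                               # simplified outputs
```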

We now discuss the omitted cases of the task Deposit, which deal with corruption or misbehavior.

If in Step 6 the map entry (λ, x) is undefined as above, but the user is corrupted, the fraud-detection ID φ is not independently and uniformly drawn, but chosen adversarially. Similar to the task IssueWallet (cp. Section 4.3.1), this may infringe upon the unlinkability of the user's transactions. But we do not give any such guarantees for corrupted users. Also, this may lead to a premature abort (cp. Step 7), if the adversary chooses a φ ∈ *bl* which is blacklisted.

*ᵃ* N.b., for honest users φ is a mere random number. The identity of the PoS cannot remain secret, because even an honest user might eventually be asked by the violation enforcer to prove its participation in this transaction.

In Step 6 the map entry (λ, x) will be defined if the user commits double-spending or if fraud-detection IDs have been precalculated and prefilled for blacklisting purposes (cp. Section 4.4.2). Remember that the wallet ID λ and the transaction counter x correspond to the tree and the depth of a transaction node, respectively. In this case, Fapc assigns the same fraud-detection ID to the current transaction record in order to preserve consistency. Also, the set of all previous double-spending tags ds which have been recorded for transactions with the same wallet ID and transaction counter, together with the user's identity, is leaked to the adversary. The adversary replies with a proof of guilt which is internally recorded by Fapc.

The latter two steps fix an oversight in [Nag+20]. To understand them we need to skip ahead. The proof of guilt is a kind of digital "evidence" which allows the operator to prove to any other party that a particular user is a fraudster and has committed double-spending. To enforce soundness, consistency and protection against false accusation of innocent users, Fapc internally manages a map which records pairs of user identities and proofs of guilt. Usually, these proofs of guilt are not directly obtained via Deposit in case of a double-spending, but their generation is deferred to the utility task DetectDS (cp. Section 4.4.1). The validity of a proof of guilt can be checked by any party using VerifyGuilt (cp. Section 4.4.1). This three-step approach is necessary, because misbehaving users might commit double-spending at different PoSes which are offline; only after these PoSes have synchronized their state with the operator is the operator actually capable of detecting the double-spending. However, in a real implementation (at least in our implementation) the environment which guides all parties can create a valid proof of guilt as soon as a (even honest) user has committed double-spending. The environment simply runs the real code of DetectDS in its own head without actually calling DetectDS. The proof of guilt is perfectly valid and does not contradict the protection against false accusation (because the user has indeed committed double-spending), but has nevertheless not been output by DetectDS. On a high level the environment can do so, because it instantly knows which users commit double-spending and does not depend on a periodic synchronization of information. To enable consistent replies by VerifyGuilt for these valid and legitimate but not system-generated proofs, Fapc also needs to generate such a proof of guilt at the very moment when double-spending occurs. We stress that (1) this proof is not output but only kept internally (direct output would preclude offline capabilities) and (2) misbehaving users immediately lose their anonymity as soon as they commit double-spending and not only when DetectDS is invoked. This has been overlooked in [Nag+20], where the synchronization of state has not been formally defined but only explained on a hand-waving level.

Lastly, if the operator is corrupted, Fapc not only leaks (s, φ, *pid*<sup>P</sup>) to the adversary but also the price p (cp. Step 10 in Fig. 4.11). As opposed to the leakage of the random numbers s, φ, this might lead to an actual loss of unlinkability,² but it is the best we can hope for if the operator is corrupted. Although we do not stipulate what a recalculation tag rc looks like, because this is specific to the implementation, the recalculation tag enables the operator to recalculate the balance of a wallet, i.e. it acts like a digital invoice (see also RecalculateBalance in Section 4.4.3). Hence, it is quite reasonable that this tag encodes the price of a transaction among other things. If the operator is corrupted, the price cannot be kept secret from the adversary. Admittedly, the additional leakage of the price is a rather small detail, but again, it has been overlooked in [Nag+20] as the synchronization of state has not been spelled out and thus has not been part of the simulation-based proof.

#### **4.3.3 Disbursement**

As Disburse (cp. Fig. 4.12) is very similar to the task of Deposit, we will refrain from describing it again in full detail but rather just highlight the differences to Deposit.

Please remember that Disburse is designed with the post-payment scenario from Section 2.3.3 in mind. In other words, disbursement of points means clearing the recent balance and invalidating the wallet. Alternative definitions are discussed below.

The first difference is that it is conducted with the operator rather than a PoS, and no blacklist is taken as input as we do not want to prevent any user from clearing the balance. Disburse is identifying for the user to allow the operator to invoice them and check whether they (physically) pay the correct amount. Also, the users do not obtain a new serial number as part of their output, because the transaction entry is supposed to be a leaf node. Nonetheless, a serial number is internally drawn and associated with the transaction. Instead of obtaining a price from the operator, the recent balance is used and the price p ≔ −bill is part of the output to the operator. The prove-participation tag is omitted, because Disburse is identifying for the user and hence there is no point in a separate prove-participation tag; the double-spending tag and the recalculation tag are still kept. Also note that the leakage to the adversary is asymmetric to the leakage in Deposit and does not consider an extra case for a corrupted operator, because the operator directly participates in Disburse and learns the current price p = −bill as part of the output (cp. Step 7 in Fig. 4.12 vs. Step 10 in Fig. 4.11).
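For comparison, here is a minimal sketch of the honest-case Disburse bookkeeping under the same hypothetical data layout as the Deposit sketch above. It only illustrates that the wallet is cleared and that the (negative) price equals the accumulated balance; all names are illustrative.

```python
import secrets

# Hypothetical honest-case Disburse bookkeeping (post-payment scenario).
def toy_disburse(trdb, frauddet, s_prev, pid_user, pid_op, tags):
    entry = trdb[s_prev]
    x_prev, lam, b_prev = entry[3], entry[4], entry[8]
    x, s = x_prev + 1, secrets.token_hex(16)
    phi = frauddet.setdefault((lam, x), secrets.token_hex(16))
    price = -b_prev                        # p := -bill clears the wallet
    tag_ds, tag_rc = tags                  # no prove-participation tag for Disburse
    trdb[s] = (s_prev, s, phi, x, lam, pid_user, pid_op, price, 0,
               tag_ds, tag_rc, None)
    return b_prev                          # the bill output to the operator
```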

**Alternative Definitions** To realize the other scenarios from Sections 2.3.1 to 2.3.3, Disburse could be modified in several aspects. Each of these options makes Disburse more similar to Deposit:

² Of course, this depends on the pricing model. If there is only a single, constant price, the additional leakage does not bear any information.


## **4.4 Utility Tasks**

To obtain a feature-complete anonymous point collection system we also provide the utility tasks DetectDS, VerifyGuilt, BlacklistWallet and ProveParticipation. All of these tasks deal with different aspects arising from fraudulent user behavior.

#### **4.4.1 Double-Spending Detection and Guilt Verification**

Due to our requirement to allow offline PoSes, misbehaving users are able to fraudulently deposit points on outdated states of their wallets. This double-spending cannot be prevented but must be detected afterwards. To ensure this, Fapc provides the tasks DetectDS (cp. Fig. 4.13) and VerifyGuilt (cp. Fig. 4.14).

DetectDS is a one-party task executed by the operator which takes two double-spending tags ds and ds′ as input. Again, we first describe the case in which all parties are honest and in which the environment only inputs genuine double-spending tags, i.e. tags that have been output by Deposit or Disburse before (cp. Definition 4.1).

First, Fapc looks up the corresponding transaction nodes *trdb* and *trdb*′. These exist because ds and ds′ are genuine. The condition in Step 2 in Fig. 4.13 simplifies to the question whether the fraud-detection IDs φ and φ′ match or not. If they are not equal, the given double-spending tags do not belong to transactions that have a common predecessor; in other words, the double-spending tags do not attest double-spending. In this case the adversary is asked for a user identity *pid*<sup>U</sup>, a proof of guilt π and a result bit *result*. However, the result bit is not used at

Figure 4.13: The Functionality Fapc (cont. from Fig. 4.1) – Task DetectDS

#### **Functionality** Fapc **(cont.) – Task** VerifyGuilt

Party input: (verify\_guilt, *pid*<sup>U</sup>, π)


Party output:(verified\_guilt,*result*)

Figure 4.14: The Functionality Fapc (cont. from Fig. 4.1) – Task VerifyGuilt

all, but Fapc unconditionally returns the invalid output (⊥, ⊥). Note that we assume the user to be honest, i.e. *pid*<sup>U</sup> ∉ PIDcorr holds. This branch asserts protection against false accusation for honest users. If the fraud-detection IDs are equal, there has indeed been a double-spending incident. In this case the user's identity is leaked to the adversary and the adversary is asked to provide a proof of guilt π. This proof of guilt is then recorded as being valid for the fraudulent user. This branch asserts completeness.

We now discuss the remaining corner cases. If no transaction exists for at least one of the given double-spending tags, i.e. at least one of the double-spending tags is a fake tag, the first branch is applied and the adversary may decide about the user and the result. Also, if the denoted user is corrupted, the result is adopted unaltered. This may result in a valid proof of guilt although the user has not committed double-spending, but protection against false accusation is not guaranteed for corrupted users. Moreover, if (*pid*<sup>U</sup>, π) has already been defined, the result is not changed. This asserts consistency across multiple invocations, and a proof of guilt that has been invalid for a particular user cannot spontaneously become valid and vice versa.

Double-spending detection is complemented by VerifyGuilt. It is also a one-party task but can be performed by any party. To put it simply, VerifyGuilt checks whether the given proof of guilt π is internally recorded as being valid for the particular user ID *pid*<sup>U</sup>. Again, this oversimplification turns out to be unrealizable, because consistency with respect to fake proofs and corruption of parties must be taken into account.

First, VerifyGuilt checks whether this particular pair (*pid*<sup>U</sup>, π) has already been defined and outputs whatever has been output before. This ensures consistent answers across different invocations. If (*pid*<sup>U</sup>, π) has neither been issued nor queried before *and* the affected user is corrupted, the adversary is allowed to decide if this proof of guilt should be accepted. This reflects that we do not protect corrupted users from false accusations of guilt. If the user is honest and (*pid*<sup>U</sup>, π) has neither been issued nor queried before, then the proof of guilt is marked as invalid. This protects honest users from being accused by fake proofs which have not been issued by the ideal functionality itself. Finally, the result is recorded for the future and output to the party. This possibility of public verification is vital to prevent the operator from wrongly accusing any user of double-spending and should for instance be utilized by the dispute resolver before it agrees to blacklist and therefore deanonymize a user on the basis of double-spending.
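The interplay of DetectDS and VerifyGuilt can be illustrated by the following hypothetical sketch: two genuine double-spending tags attest double-spending exactly if their transaction entries share a fraud-detection ID, and results are recorded in a map so that repeated VerifyGuilt queries stay consistent. Corruption branches and adversarially chosen outputs are omitted; all names are illustrative.

```python
# entry_by_ds_tag: double-spending tag -> 12-tuple entry (toy layout from above)
# guilt: map from (pid_user, proof) to True/False, as managed internally by F_apc
def toy_detect_ds(entry_by_ds_tag, guilt, tag1, tag2, adv_proof):
    e1, e2 = entry_by_ds_tag.get(tag1), entry_by_ds_tag.get(tag2)
    if e1 is None or e2 is None or e1[2] != e2[2]:   # position 2 = fraud-detection ID
        return (None, None)                          # no double-spending attested
    pid_user = e1[5]                                 # the fraudster is identified
    guilt[(pid_user, adv_proof)] = True              # record the proof as valid
    return (pid_user, adv_proof)

def toy_verify_guilt(guilt, pid_user, proof):
    if (pid_user, proof) not in guilt:               # never issued (honest user):
        guilt[(pid_user, proof)] = False             # record as invalid, once and for all
    return guilt[(pid_user, proof)]
```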

#### **4.4.2 Wallet Blacklisting**

The task BlacklistWallet (cp. Fig. 4.15) is run between the operator and dispute resolver. The operator inputs a blacklisting tag bl which the operator has obtained at the end of IssueWallet to denote the wallet the operator wishes to blacklist. If BlacklistWallet succeeds, the operator

*ᵇ* Only an honest operator is guaranteed to get a blacklist for the "correct" wallet. N.b., λ is never output anywhere, hence the chance to hit an actually existing λ is negligible.

Figure 4.15: The Functionality Fapc (cont. from Fig. 4.1) – Task BlacklistWallet

receives a set *bl*<sup>λ</sup> of past and upcoming fraud-detection IDs as output so it may add them to the PoS blacklist *bl*. The dispute resolver inputs a user identity *pid*′<sup>U</sup> to signal its consent to blacklist a wallet of that user. We assume that both parties negotiated the user identity out-of-band before the task starts. For example, the operator might have presented a valid proof of guilt to the dispute resolver for that user, or the user agreed to be blacklisted due to a lost wallet. Note that IssueWallet (cp. Section 4.3.1 and Fig. 4.9) is identifying for the user. Hence, the operator knows which blacklisting tags are associated with which user.

Again, we start the description with the "good" case, i.e. an honest operator that inputs a genuine blacklisting tag and an honest user. (N.b.: The dispute resolver is assumed to be always honest.) In a nutshell, Fapc first determines the wallet ID λ for which the blacklisting tag has been output and looks up the associated user ID *pid*<sup>U</sup> from the transaction database. If the

Figure 4.16: The Functionality Fapc (cont. from Fig. 4.1) – Task RecalculateBalance

dispute resolver inputs the same user ID, Fapc checks how many values of (λ, ⋅) are already defined and extends the map up to the first bound fraud-detection IDs, with bound being a parameter we assume to be greater than the number of transactions a user would be involved in within one billing period. To that end, yet undefined fraud-detection IDs (λ, x) with x ≤ bound are uniquely and randomly drawn. This ensures that upcoming transactions use predetermined fraud-detection IDs that are actually blacklisted.

If a corrupted operator inputs a fake blacklisting tag, the adversary is asked to provide a user ID *pid*<sup>U</sup> and a wallet ID λ which must not belong to an existing wallet. In this case the operator might eventually receive a set *bl*<sup>λ</sup> of bound random fraud-detection IDs, but these are never used in any task. If the user is corrupted, the adversary is allowed to choose the fraud-detection IDs.
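The pre-filling of fraud-detection IDs described above can be sketched by the following hypothetical helper (same toy data layout as before; `bound` is the parameter from the text).

```python
import secrets

# Hypothetical sketch of the blacklisting step: all fraud-detection IDs of the
# wallet lam up to the parameter bound are fixed (drawing fresh ones where
# still undefined) and returned so the operator can hand them to the PoSes.
def toy_blacklist_wallet(frauddet, lam, bound):
    bl_ids = set()
    for x in range(bound + 1):
        phi = frauddet.setdefault((lam, x), secrets.token_hex(16))
        bl_ids.add(phi)
    return bl_ids       # past and upcoming fraud-detection IDs of this wallet
```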

#### **4.4.3 Balance Recalculation**

The task RecalculateBalance (cp. Fig. 4.16) is a one-party task executed by the operator. The operator inputs a set of fraud-detection IDs *bl* and a set of recalculation tags rc. The result is the accumulated sum of the prices of all transactions for which an associated recalculation tag has been provided and which are selected by a matching fraud-detection ID. Note that although the sum may contain the prices of transactions and wallets that have already been cleared, this does not falsify the value of bill, as every successful execution of Disburse creates an entry with the amount that was disbursed as a negative price.

The *intended* usage is to recalculate the balance bill of individual wallets or of all wallets of a particular user. To this end, the operator needs to input a set *bl* of fraud-detection IDs that equals a complete or otherwise unaltered set *bl*<sup>λ</sup> as it has been output by BlacklistWallet (cp. Section 4.4.2), or a union thereof. Figuratively speaking, the intention is that the operator handles the output sets *bl*<sup>λ</sup> of BlacklistWallet as monolithic, opaque blocks and that the input set *bl* is a union of these. Still, there is no guarantee that the operator inputs a union of "complete" sets *bl*<sup>λ</sup>. Moreover, the set *bl* and the set rc could also contain fake entries. In the latter case, the adversary is allowed to modify the result by an offset deviate without any restrictions.
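A minimal sketch of the intended recalculation, assuming the toy data layout from Section 4.3: only transactions whose fraud-detection ID is blacklisted and whose recalculation tag has been supplied contribute to the sum. Handling of fake entries and the adversarial offset is omitted; names are illustrative.

```python
# Hypothetical sketch of RecalculateBalance over the toy transaction database.
def toy_recalculate_balance(trdb, bl_ids, rc_tags):
    bill = 0
    for entry in trdb.values():
        phi, price, tag_rc = entry[2], entry[7], entry[10]
        if phi in bl_ids and tag_rc in rc_tags:
            bill += price          # Disburse entries carry a negative price
    return bill
```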

In [Nag+20] the tasks BlacklistWallet and RecalculateBalance are combined into a single task (called BlacklistUser there). The task has been split, because the dispute resolver is only required for the actual blacklisting part, but not for the recalculation part, and keeping these parts separate makes this clear.

#### **4.4.4 Prove of Participation**

This is a two-party task involving a user and the violation enforcer (cp. Fig. 4.17), and it is assumed to be conducted with every user who has been physically identified by the violation enforcer and is suspected of having avoided Deposit. It allows well-behaving users to prove their successful participation in a transaction with the PoS at which the identification took place, while a fraudulent user will not be able to do so.

The violation enforcer inputs the identity of the suspected user *pid*<sup>U</sup>, the identity of the PoS *pid*<sup>P</sup> and a set pp of prove-participation tags which are related to the investigated transaction in a temporal and spatial manner and which must be provided by the PoS that reported the incident. The user inputs a single prove-participation tag pp after having learned which PoS and potential transactions are under investigation. Using the "right" pp allows users to prove their innocence.

Again, we describe the "good" case (all parties are honest, the input tags are genuine) first. If the provided prove-participation tag pp is in the set of investigated prove-participation tags pp and if the recorded user and PoS identities *pid*′ U , *pid*′ P of the transaction which is associated to pp match the identities under investigation, then the violation enforcer obtains a positive result, else a negative result.

If the prove-participation tag is a fake tag and if both the user and the PoS are corrupted, then the adversary may decide on the result. This may lead to a false positive result, but this is an inherent restriction of our setting with offline capabilities. Remember that the task Deposit is a two-party task between the user and the PoS. If the suspected user and the PoS are both corrupted and collude, the PoS is able to give a false testimony.

If the violation enforcer is corrupted, the prove-participation tag pp is leaked to the adversary. We would like to stress that this is not only a peculiarity of our proposed implementation,

Figure 4.17: The Functionality Fapc (cont. from Fig. 4.1) – Task ProveParticipation

but inherent to the task. Assume that the user is able to successfully prove its participation. Although the violation enforcer only learns a single result bit, this is sufficient to find out which pp ∈ pp the user has input. The violation enforcer could repeatedly run the task and summon the user to prove its participation for a descending sequence of bisected sets until the last set only contains a single tag. Nonetheless, this does not affect the anonymity or unlinkability of any other transactions.
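The bisection strategy can be made explicit with a small hypothetical sketch: with one yes/no answer per run of ProveParticipation, a corrupted violation enforcer narrows a set of n candidate tags down to the user's single tag in about log₂ n runs. The function name and callback are illustrative only.

```python
# Hypothetical sketch of the bisection abuse described above.
# user_has_tag_in(subset) models one run of ProveParticipation and returns
# True iff the user's tag lies in the given subset.
def locate_users_tag(candidate_tags, user_has_tag_in):
    tags = list(candidate_tags)
    while len(tags) > 1:
        half = tags[: len(tags) // 2]
        tags = half if user_has_tag_in(half) else tags[len(tags) // 2:]
    return tags[0]

# e.g. locate_users_tag(range(16), lambda s: 11 in s) returns 11 after 4 queries
```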

We are aware of the fact that this definition leaves room for an "attack", or more precisely an abuse, by the PoS. The PoS triggers the violation enforcer to identify the offending user out-of-band (e.g. by taking a photo) and the same PoS also provides the set pp of prove-participation tags. Of course, the intended idea is that this set encompasses prove-participation tags which are somehow³ related to the whereabouts of the incident. However, the PoS could intentionally provide a wrong set pp which misses the relevant tags and thus make it impossible for the (innocent) user to exculpate themselves. This flaw cannot be fully resolved, but only mitigated. A

³ We are intentionally vague here, because the precise meaning of "whereabouts" depends on the concrete deployment.

possible solution requires the introduction of another ideal functionality⁴ and would deviate very much from [Nag+20]. This extension is discussed in Chapter 10. Moreover, note that a PoS cannot use this "attack" to target a specific user and that the strategy poses the risk that not only a single, but multiple users are (falsely) found guilty. (Because the PoS cannot know which prove-participation tags should be omitted from pp and thus may drop too many.) However, as for a single transaction only one user can have been cheating, such an impossible result should be noticed and lead to an audit of the system. But this is out of the scope of the security model.

⁴ This functionality needs to encapsulate the concept of "whereabouts".

# **5 System Discussion**

In this chapter we discuss some aspects of the definition of Fapc from Chapter 4.

Most importantly, we argue why Fapc captures an ideal model of a secure and privacy-preserving anonymous point collection scheme. In particular, we illustrate how the high-level objectives of an anonymous point collection scheme (cp. Section 2.6) are reflected in Fapc. The properties (P1) to (P8) are consolidated under the term Operator Security and Correctness and discussed in Section 5.1, while properties (P9) to (P11) are summed up under User Security and Privacy in Section 5.2. Instead of defining a game-based security definition for each of these properties and then showing that the yet-to-be-defined implementation fulfills these games, we formalize the properties and show that the ideal functionality itself fulfills them.

Section 5.3 discusses some aspects of the user/PoS attributes, the leakage and the pricing function with a focus on anonymity. This section clarifies what kind of anonymity is considered in this thesis and, equally important, what is *not* guaranteed.

Finally, Section 5.4 considers two aspects of the definition for which obvious alternative approaches suggest themselves.

## **5.1 Operator Security and Correctness**

In essence, operator security, especially correctness of billing, follows from the fact that Fapc represents an incorruptible accountant which manages all wallets and their associated transactions in a single, pervasive database *TRDB*. For example, in Deposit and Disburse a (possibly malicious) user only inputs a serial number to indicate which previous wallet state should be used. All relevant information is then looked up by Fapc internally. Similar observations hold for all other tasks.

Typically, an ideal functionality is a rather simple object (e.g., a commitment, an oblivious transfer, a coin toss) and it is mostly obvious from its definition that it captures the right notion of security and correctness. In contrast, our ideal functionality Fapc is already a complex system on its own with polynomially many parties that can reactively interact forever, i.e., Fapc itself has no inherent exit point except that at some point the polynomially bounded runtime of the environment is exhausted.

As already described in Section 4.1, the set of transactions *TRDB* is best visualized as a description of a directed graph with labeled nodes and edges. Section 4.1 has also brought forward the intended interpretation of particular structural properties of this graph (e.g. wallets correspond to trees; see there and in particular cf. Fig. 4.3). In this section we show that the definition of Fapc (cp. Sections 4.2 to 4.4) actually fulfills this intention. We give a series of graph-theoretic lemmas to prove that the desired structural properties hold after each invocation of a task of Fapc as a sort of invariant and thereby assert that there is no execution of Fapc which can deviate. These lemmas include that the graph as a whole is a directed forest where each tree corresponds to a wallet ID λ, that double-spending corresponds to branching and that different wallet states have the same fraud-detection ID if and only if they have the same depth in the same tree. Moreover, these lemmas are closely associated to the desired properties of an anonymous point collection scheme (cp. Section 2.6).

**Definition 5.1 (Ideal Transaction Graph)** *The transaction database TRDB* = {*trdb* } *with*

$$
trdb = (s^{\text{prev}}, s, \varphi, x, \lambda, pid_{\mathcal{U}}, pid_{\mathcal{P}}, p, b, \omega_{\text{ds}}, \omega_{\text{rc}}, \omega_{\text{pp}}) \in TRDB \tag{5.1}
$$

*is a directed, labeled graph with vertices identified by* s*, edges identified by* (s<sup>prev</sup>, s)*, vertex labels given by* (φ, x, λ, *pid*<sup>U</sup>, b) *and edge labels given by* (*pid*<sup>P</sup>, p, ds, rc, pp)*. This graph is called the* Ideal Transaction Graph*.*

**Lemma 5.2 (Forest Structure of the Ideal Transaction Graph)** *The Ideal Transaction Graph TRDB is a forest.*

Proof *TRDB* is a forest, if and only if it is cycle-free and every node has in-degree at most one. A new node is only inserted in the scope of IssueWallet, Deposit or Disburse. Proof by Induction: The statement is correct for the empty *TRDB*. If IssueWallet (cp. Fig. 4.9) is invoked, a new node with no predecessor is inserted. Moreover, the serial number of the new node is randomly chosen from the set of unused serial numbers, i.e., it is unique and no existing node can point to the new node as its predecessor. If Deposit (cp. Figs. 4.10 and 4.11) or Disburse (cp. Fig. 4.12) is invoked, a new node is inserted that points to an existing node. Again, the serial number of the new node is randomly chosen from the set of unused serial numbers, i.e., it is unique and no existing node can point to the new node as its predecessor. Hence, no cycle can be closed. Since the only incoming edge of a node is defined by the stated predecessor prev (which may also be ⊥), each vertex has in-degree at most one.
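As a sanity check, the invariant of Lemma 5.2 can be phrased as a small test over the toy transaction database from Section 4.3 (hypothetical and illustrative only): since every entry stores at most one predecessor, in-degree at most one holds by construction, and it only remains to verify that predecessor chains never cycle or dangle.

```python
# Hypothetical invariant check corresponding to Lemma 5.2 on the toy layout:
# entry[0] is the predecessor serial number s_prev (None for root nodes).
def is_forest(trdb):
    for entry in trdb.values():
        seen, s_prev = set(), entry[0]
        while s_prev is not None:                  # walk towards the root
            if s_prev in seen or s_prev not in trdb:
                return False                       # cycle or dangling predecessor
            seen.add(s_prev)
            s_prev = trdb[s_prev][0]
    return True
```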

**Lemma 5.3 (Tree-wise Uniqueness of the Wallet Identifier)** *The wallet ID* λ *maps one-to-one and onto a connected component (i.e., tree) of the Ideal Transaction Graph.*

Proof " ⟸ ": Let *trdb* be an arbitrary node in *TRDB* and let λ be its wallet ID. Furthermore, let *trdb*<sup>∗</sup> be the root of the tree containing *trdb*. Then on the (unique) path from *trdb*<sup>∗</sup> to *trdb*,

every node apart from *trdb*<sup>∗</sup> was inserted by means of either Deposit (cp. Figs. 4.10 and 4.11) or Disburse (cp. Fig. 4.12), both of which ensure that the inserted node has the same λ as its predecessor. By induction over the length of the path, *trdb* has the same wallet ID as *trdb*<sup>∗</sup> and hence the wallet ID is a locally constant function on *TRDB*.

" ⟹ ": For contradiction, assume there are two nodes *trdb* and *trdb*′ with equal wallet IDs λ = λ′ in two different connected components. Pick the root nodes *trdb*<sup>∗</sup> and *trdb*′<sup>∗</sup> of their respective trees. By " ⟸ " we get λ<sup>∗</sup> = λ = λ′ = λ′<sup>∗</sup>, i.e., the root nodes have equal wallet IDs, too. But both root nodes were inserted in the scope of IssueWallet and the wallet ID is randomly drawn from the set of *unused* wallet IDs, i.e., they cannot both have the same wallet ID. Contradiction!

**Lemma 5.4 (Tree-wise Constancy of the User PID)** *Within a tree of the Ideal Transaction Graph the PID pid*<sup>U</sup> *of the corresponding user is constant.*

Proof Same proof as " ⟸ " in the proof of Lemma 5.3.

In other words, Lemma 5.4 states that a wallet (a tree in *TRDB*) is always owned by a distinct user. But a user can own multiple wallets.

#### **Lemma 5.5 (Layer-wise Uniqueness of the Fraud-Detection Identifier)**

*For every node of the Ideal Transaction Graph the transaction counter x equals the depth of the node within its tree, and the fraud-detection ID φ is uniquely determined by the pair (λ, x); in particular, two wallet states carry the same fraud-detection ID only if they belong to the same tree at the same depth.*
Proof Proof by induction. The statement is true for the empty *TRDB*. In the scope of IssueWallet (cp. Fig. 4.9) a new root node is inserted, IssueWallet sets x ≔ 0 and an unused φ is chosen. In the scope of Deposit or Disburse, x is calculated as x ≔ x<sup>prev</sup> + 1, where by induction x<sup>prev</sup> is the depth of its predecessor. With respect to φ we note that when inserted, every node gets as fraud-detection ID the value stored in (λ, x), which only depends on the node's wallet ID and depth. When this value is set (in either IssueWallet, Deposit, Disburse or BlacklistWallet, cp. Figs. 4.9 to 4.12 and 4.15) it is chosen from the set of *unused* fraud-detection IDs and is therefore unique for given λ and x.

So far, the lemmas above have not had a concrete semantic interpretation in terms of an anonymous point collection scheme. This changes for the upcoming lemmas.

**Lemma 5.6 (Billing Correctness)** *Let trdb* = (s<sup>prev</sup>, s, φ, x, λ, *pid*<sup>U</sup>, *pid*<sup>P</sup>, p, b, ds, rc, pp) *be an arbitrary but fixed node. If trdb is not a root, let trdb*<sup>prev</sup> = (⋅, s<sup>prev</sup>, φ<sup>prev</sup>, x<sup>prev</sup>, λ, *pid*<sup>U</sup>, *pid*<sup>P,prev</sup>, p<sup>prev</sup>, b<sup>prev</sup>, ds<sup>prev</sup>, rc<sup>prev</sup>, pp<sup>prev</sup>) *be its predecessor. Then* b = b<sup>prev</sup> + p *holds for non-root nodes and* s<sup>prev</sup> = ⊥*,* b = 0 *for root nodes.*

Proof Same induction argument as in proof of Lemma 5.5.

**Lemma 5.7 (Double-Spending Detection Completeness)** *Let the operator be honest and let a user (possibly malicious) with PID pid*<sup>U</sup> *have committed double-spending while interacting with two honest PoSes with PIDs pid*<sup>P</sup> *and pid*′<sup>P</sup>*.¹ Let* ds *and* ds′ *denote the corresponding double-spending tags that are output on the PoSes' side.*

- (2) Due to Item (1), (*pid*<sup>U</sup>, π) ≔ OK holds. This is the return value of VerifyGuilt (cp. Fig. 4.14).

**Lemma 5.8 (Correctness of Wallet Blacklisting and Balance Recalculation)** *Let* bl *be an arbitrary but fixed blacklisting tag which* O *has received as output from* IssueWallet *for a wallet with ID* λ*. Let the operator* O *and all PoSes which interact with this wallet be honest. Under the assumption that the wallet is used in less than* bound *transactions, i.e., in less than* bound *invocations of* IssueWallet*,* Deposit *and* Disburse*, the following two statements hold:*


¹ The PoS might be the same in both interactions.

² We postulate that the dispute resolver agrees to blacklist the affected user.

- (2) Let s<sup>prev</sup> denote the serial number for which Deposit is invoked and let *trdb*<sup>prev</sup> = (⋅, s<sup>prev</sup>, φ<sup>prev</sup>, x<sup>prev</sup>, λ, …) be the corresponding transaction entry. By assumption x<sup>prev</sup> < bound holds. As BlacklistWallet has previously been called, φ = (λ, x<sup>prev</sup> + 1) is already fixed. Moreover, φ ∈ *bl*<sup>λ</sup> ⊆ *bl* holds and thus Deposit aborts.
	- (3) By assumption fake rc = ∅ holds in RecalculateBalance and thus deviate = 0 follows (cp. Step 6 in Fig. 4.16). Let

$$\Xi_{\lambda} := \{ (s, p) \mid trdb = (\cdot, s, \cdot, \cdot, \lambda, \cdot, \cdot, p, \cdot, \cdot, \cdot, \cdot) \in TRDB \} \tag{5.2}$$

be the set of all (s, p)-pairs for the wallet with wallet ID λ under consideration. Let

$$\Xi := \{ (s, p) \mid \exists\, trdb = (\cdot, s, \varphi, \cdot, \cdot, \cdot, \cdot, p, \cdot, \cdot, \omega_{\text{rc}}, \cdot) \in TRDB \land \omega_{\text{rc}} \in \Omega_{\text{rc}}^{\text{genuine}} \land \varphi \in bl_{\lambda} \} \tag{5.3}$$

be as in Step 4 of Fig. 4.16. We have to show that Ξ<sub>λ</sub> = Ξ holds.

" ⊆ ": Let (s<sup>∗</sup>, p) ∈ Ξ<sub>λ</sub> and let *trdb*<sup>∗</sup> = (⋅, s<sup>∗</sup>, φ, ⋅, λ, ⋅, ⋅, p, ⋅, ⋅, rc, ⋅) ∈ *TRDB* be the corresponding transaction entry. φ ∈ *bl*<sup>λ</sup> = *bl* follows by Item (1) and rc ∈ Ω<sub>rc</sub><sup>genuine</sup> follows by assumption. This yields (s<sup>∗</sup>, p) ∈ Ξ.

" ⊇ ": Assume there is a pair (s<sup>∗</sup>, p) ∈ Ξ ⧵ Ξ<sub>λ</sub> and let *trdb*<sup>∗</sup> = (⋅, s<sup>∗</sup>, φ, ⋅, λ′, ⋅, ⋅, p, ⋅, ⋅, rc, ⋅) ∈ *TRDB* be the corresponding transaction entry. (s<sup>∗</sup>, p) ∉ Ξ<sub>λ</sub> implies λ ≠ λ′. However, this means there exist x, x′ ∈ {0, … , bound} s.t. (λ, x) = φ = (λ′, x′) and there exists another *trdb* = (⋅, ⋅, ⋅, ⋅, λ, ⋅, ⋅, ⋅, ⋅, ⋅, rc, ⋅) ∈ *TRDB* for the "correct" λ with the identical rc. Each of these conditions is impossible and an immediate contradiction: the assignment (λ, x) ↦ φ is injective (cp. Lemma 5.5) and in Deposit as well as Disburse a unique recalculation tag rc is selected (cp. Step 10 in Fig. 4.11 and Step 7 in Fig. 4.12).

We now discuss why the properties (P1) to (P8) are fulfilled by Fapc.


## **5.2 User Security and Privacy**

In this section we argue why Fapc implements the properties (P9) to (P11). As in the previous section, we first show two lemmas.

**Lemma 5.9 (Double-Spending Detection Soundness)** *Let the user with pid*<sup>U</sup> *be honest and not have committed double-spending.*


(2) The task VerifyGuilt first checks whether (*pid*<sup>U</sup>, π) has already been defined. (*pid*<sup>U</sup>, π) is only defined in Deposit (cp. Step 6 in Fig. 4.10), Disburse (cp. Step 5 in Fig. 4.12), DetectDS (cp. Fig. 4.13) or VerifyGuilt (cp. Fig. 4.14). The assumption that the user has not committed double-spending immediately rules out the first two options and Item (1) rules out the third option. If (*pid*<sup>U</sup>, π) has been defined by VerifyGuilt, then VerifyGuilt outputs the same result as in the previous invocation. Hence, w.l.o.g. it suffices to consider first-time invocations of VerifyGuilt and to assume that (*pid*<sup>U</sup>, π) is undefined. In this case, VerifyGuilt returns NOK irrespective of the input (cp. Step 3 in Fig. 4.14).

**Lemma 5.10 (Prove of Participation Completeness)** *Let the user be honest and* pp *the prove-participation tag which the user has obtained as output from* Deposit *in an interaction with a (possibly malicious) PoS. Then,* ProveParticipation *returns* OK *upon input* (*pid*<sup>U</sup>, *pid*<sup>P</sup>, pp) *for any investigated set of tags that contains* pp*.*

Proof As the user is honest and pp genuine, a corresponding *trdb* ∈ *TRDB* has been recorded by an execution of Deposit. The statement follows from the definition of ProveParticipation (cp. else-branch of Step 5 in Fig. 4.17).

The information leakage that needs to be considered for an assessment of user privacy directly follows from the in- and output as well as the explicit leakage of the ideal functionality. We stress that we only care about privacy for honest, well-behaving, non-blacklisted³ users.


³ Note that the operator may only blacklist users with the help of the incorruptible dispute resolver who only cooperates if the user has agreed or misbehaved.


Table 5.1: Information an adversary learns about honest users.

Lemma 5.10 asserts that suspected but inculpable users who successfully completed Deposit and are accidentally part of the investigation can prove their innocence. In ProveParticipation, Fapc only outputs a single bit indicating whether the user has successfully participated or not. The violation enforcer does not learn anything about any of the user's transactions beyond that.

(P11) *Protection Against False Accusation:* Follows from Lemma 5.9.

## **5.3 Impact of the Attributes and Leakage on the Privacy Level**

The ideal functionality provides unlinkability of transactions (cp. property (P9)) up to what can be deduced from user and PoS attributes and the leakage of the ideal functionality. As already discussed in Section 2.4, we assume these attributes to be sufficiently indistinct that they do not enable tracking of the user. This is not ensured within the scope of Fapc, apart from the outputs to the users, which enable them to check for themselves that the attributes are not identifying. Since we prove our realization P5C to be indistinguishable from the ideal functionality Fapc, it is ensured that an adversary attacking P5C in the real world can only learn as much about a user as an adversary in the ideal model.

Table 5.1 summarizes what an adversary learns about the users in each task. We omitted the serial number s and the fraud-detection ID φ in the table as these are independently and uniformly drawn random values and thus cannot be exploited (see (P9) in Section 5.2). In all tasks except Deposit the user's identity *pid*<sup>U</sup> is leaked. The variables a<sup>U</sup>, a<sup>P</sup>, a<sup>P,prev</sup> and a<sup>O</sup> refer to attributes of the participating parties. The variable p denotes the price of a Deposit transaction, and bill is the total debt the user owes at the end of the task Disburse.

Let us call the period from the point in time at which a wallet is issued until the point in time at which its points are disbursed the lifetime of a wallet.⁴ For every wallet's lifetime, the operator collects all transaction information from every PoS. Hence, the operator eventually possesses two datasets:


With respect to practical privacy considerations one can naturally pose several questions: Can a single transaction be linked to a specific user? Has a user deposited points at a particular PoS? Can a user be mapped to a complete sequence of consecutive transactions? A final answer to these questions crucially depends on the concrete instantiation of the attributes a<sup>U</sup>, a<sup>P</sup> and the pricing function, but also on "environmental" parameters that cannot be chosen by the system designer, such as the total number of registered users, the average number of transactions within the lifetime of a wallet, etc. An in-depth analysis would require plausible and justifiable assumptions about probability distributions for these parameters, and would constitute a separate line of research in its own right.

In the following, however, we would like to elaborate a bit on the general aspects of the question, how a user can be linked to a wallet's transactions. This problem can be depicted as a graph-theoretical problem of finding a path in a directed graph. In short, the problem is to reconstruct the (unknown) ideal transaction graph *TRDB* from a given set of nodes without the edges. More precisely, a problem instance is given by a graph which consists of initial nodes, inner nodes and terminal nodes. Initial nodes correspond to root nodes of *TRDB*, represent IssueWallet transactions and are linked to users. Terminal nodes correspond to leaf nodes of *TRDB*, represent Disburse transactions and are also linked to users and final balances bill. Inner nodes represent the anonymous Deposit transactions in between. A directed edge connects two nodes if the target node is a plausible successor of the source node. Hence, the set of examined edges is a superset of the edges in *TRDB*. Especially, the graph is not a directed forest and the problem is to select a subset of those edges such that the "true" *TRDB* is reconstructed.

The complexity of the task obviously depends on how much larger the superset of edges is compared to the size of the true set. Assuming that transactions can only occur at discrete

⁴ N.b., the explanations are written with the running prime example of a post-payment scheme (cf. Sections 2.3.3 and 2.3.4) in mind. Adopted considerations hold for the other scenarios.

points in time, the inner nodes can be ordered in layers. As a bare minimum, an edge is only plausible and thus contained in the superset if the connected nodes have equal user attributes a<sup>U</sup>, if the attribute a<sup>P,prev</sup> of the target node equals the attribute a<sup>P</sup> of the source node and if the target node is in a later layer than the source node (because time can only increase). Additionally, background knowledge such as the geo-position of the PoSes, etc. can be utilized to further reduce the superset of plausible edges and thereby simplify the search problem.
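A hypothetical sketch of this edge-superset construction is given below; the node fields (`a_user`, `a_pos`, `a_pos_prev`, `layer`, `id`) are illustrative names for the quantities discussed above, not notation from the model.

```python
# Hypothetical sketch: a directed edge (u, v) is plausible iff the user
# attributes match, the previous-PoS attribute of v equals the PoS attribute
# of u, and v lies in a strictly later time layer.
def plausible_edges(nodes):
    edges = []
    for u in nodes:
        for v in nodes:
            if (u["a_user"] == v["a_user"]
                    and v["a_pos_prev"] == u["a_pos"]
                    and v["layer"] > u["layer"]):
                edges.append((u["id"], v["id"]))
    return edges   # a superset of the true edges of the hidden transaction graph
```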

For privacy, two characteristics are important: How many solutions exist, and what is the computational complexity of finding one (or all) of them? This results in a trade-off between two borderline cases:


If additional background information is omitted, the problem can be cast as a specialized instance of various NP-complete problems, e.g., a parallel version of the KNAPSACK problem or a generalized version of the PARTITION SUM problem with variable partition sizes. For general instances, these problems are NP-complete. This is beneficial as it implies that finding a solution is generally believed to be intractable. However, there might be good heuristics for all "natural" instances. Moreover, depending on the concrete parameters (e.g., an upper bound on the maximum price or the balance bill) the problem might become fixed-parameter tractable [Alb+18]. In other words, although solving the general problem is assumed to have superpolynomial runtime in the instance size, it might still be practically solvable for "real world" instances. We stress again that an in-depth analysis would require looking at concrete distributions of these parameters, which may be the basis for an independent work.
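The subset-sum flavor of the problem can be illustrated with a deliberately naive toy sketch: enumerating which subsets of anonymous deposit prices are consistent with an observed bill. This stripped-down variant is not the full reconstruction problem, but already it tends to have many consistent explanations, and deciding it exactly is NP-complete in general.

```python
from itertools import combinations

# Hypothetical toy illustration: which subsets of anonymous deposit prices
# sum up to an observed bill?  (Brute force, exponential in the input size.)
def explanations(deposit_prices, bill):
    return [combo for r in range(1, len(deposit_prices) + 1)
            for combo in combinations(deposit_prices, r)
            if sum(combo) == bill]

# e.g. explanations([1, 2, 3, 4, 5, 6], 7) yields several candidate subsets
```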

Nonetheless, there are indicators that, if finding one solution is easy, there might be a myriad of solutions, which again yields privacy. In [Nag+20], the authors sketch such an estimation for the German Toll Collect system, which indicates that the solution space for mapping a particular user (there: truck) to a specific wallet (there: trip) might be vast.

In practice, several privacy notions like k-anonymity are established. For several reasons these notions are not directly applicable here. First of all, these notions evaluate the privacy level of a concrete dataset, and we stress again that this is out of the scope of this work. While at first glance the calculations above might suggest that our system features k-anonymity [Swe02] for some yet to be determined k, the notion of k-anonymity is actually not applicable for formal reasons. The definition of k-anonymity requires the database to have exactly

one entry for each individual, but our transaction database features several entries per user. Therefore, the notion of k-anonymity is syntactically not applicable to the users of our system. While we could still discuss k-anonymity in this setting if the operator combined all entries that pertain to the same user into one single entry, the privacy of our system largely stems from the operator not being able to link transactions of the same user in this way, and hence such a discussion would largely undervalue the privacy protection P5C provides.

## **5.4 Alternative Approaches**

In this section we would like to discuss alternatives for two aspects of the definition. The first section sketches a different approach to model the distributed state of the system and the resulting inconsistencies instead of using tags. The second section does not actually present a proper alternative, because the approach turns out not to work. However, it is still presented, because it seems to be a very obvious approach at first glance.

### **5.4.1 An Alternative to Tags and the Case of [Nag+20]**

In order to accurately model a distributed system with inconsistent knowledge of the individual parties, two obvious solutions exist:


The main advantage of the alternative approach is that it provides a cleaner interface, as honest parties do not export any internal information to the caller protocol. Also, this definitional approach seems at first glance to be better decoupled from a particular implementation. However, this first impression is deceptive, and it has turned out that getting the definition right under this approach ends up in an incomprehensible clutter of indices. This runs contrary to the idea that the ideal functionality should capture a plausible definition of security.

We argue that both approaches are mostly equivalent and that the choice is more a question of style. In both cases, a potential implementation would most likely use some sort of tags in one way or another to synchronize the parties' state from time to time. The main difference is that these tags are either (1) transported over the usual communication channel using incoming/outgoing message tapes, or (2) transported with the help of the environment using input/output. In both cases, the environment has the same set of possibilities to maliciously manipulate these tags, either directly by itself or with the help of the dummy adversary.⁵

Using the proposed approach, it is rather evident that Fapc must not make any assumptions about how the framing protocol, i.e. the environment, forwards the tags. For example, there is not even a guarantee that a tag that is later input into a task has ever been output before by another task. Making the tags an explicit part of the ideal functionality makes it possible to formalize conditions on non-excludable attacks in a more comprehensible way. Hiding these tags away under the cover of an indirect indexing scheme leads to more artificial conditions that are not easily justifiable.

The approach taken in [Nag+20] lies somewhere in the middle between both approaches. Tags are not part of the ideal functionality, i.e. the in-/output interfaces are very clean, but the ideal functionality does not keep track of what is known by which party either. Instead, the real implementation provides tags, but keeps them in the local state of each party and assumes that the internal state of one party is "magically" transported to another party by some unspecified mechanism of synchronization that is spontaneously executed whenever favorable. In some sense, the protocol in [Nag+20] implicitly re-introduces the assumption of globally available and perfectly consistent state. Strictly speaking, it does not realize the ideal functionality proposed there. Unfortunately, the problems have not only been a lack of thorough formalism: as it has turned out, the protocol in [Nag+20] actually contains some insecurities that have been overlooked. These insecurities have been unveiled during the write-up of this thesis and fixed.

#### **5.4.2 Balance Recalculation**

The task RecalculateBalance is defined as a one-party task which is conducted by the operator, who exclusively provides the input. If the operator provides faulty input, the calculated balance is wrong (cp. Section 4.4.3). In essence, the task RecalculateBalance provides very few correctness guarantees. This corresponds to the typical "bogus-in-bogus-out" principle. At

⁵ Formally, this is not completely true. Using the alternative approach, the environment might be required to formally corrupt one of the communication parties first, before gaining all the options it immediately has under the approach used here. But this technicality is irrelevant for the point we want to make.

first glance it might seem that this only affects the operator itself but has no security impact on other parties. This is true on a technical level within the scope of the security model. However, a wrong balance might have an impact in the "real world", if the result is used to file a claim against a user. Hence, a more practical solution should also provide evidence that the result is correct or allow the user to appeal against it.

In [Nag+20] the definition of the ideal task BlacklistUser (including the recalculation part⁶) provides stronger guarantees than what is actually fulfilled by the implementation in [Nag+20]. Again, the root of the problem is that the synchronization of states is not modeled in [Nag+20]. In contrast to other fixes between this thesis and [Nag+20], this problem has been fixed by unilaterally weakening Fapc and not tackling the implementation.

We are convinced that a stronger (and more useful) variant of the task RecalculateBalance could be provided, but at the cost of a major rework of the implementation. In a nutshell, the prove-participation tags and recalculation tags would have to be fused into one kind of tag and the implementations of Deposit/Disburse would require an additional round of communication. Moreover, RecalculateBalance would need to be converted into an optional⁷ two-party task between the operator and the user. The presentation here mostly follows [Nag+20] and the envisioned improvements are discussed in Chapter 10.

#### **5.4.3 The Commitment Problem and the Lack of Modularity**

As stated in Chapter 3, the UC framework not only provides strong security guarantees under concurrent execution in arbitrary environments, but comes with two great promises: composition and modularity. However, the definition of Fapc in Chapter 4 captures anonymous point collection in a monolithic functionality and makes only little use of modularity. Section 1.2 touches on a tempting alternative: define an ideal functionality for each of the tasks, realize each of them by a protocol, analyze their security separately and deduce the security of the system using the UC composition theorem. If this were possible, it would make the proofs much easier and give higher confidence that they are correct.

This raises the question why such a modular approach has not been used. Chapter 4 argues that a monolithic definition allows the security of anonymous point collection to be defined more transparently, because a global database that keeps track of all transactions conveys a direct semantic interpretation and avoids a slew of technical subtleties due to the distributed state between the individual tasks that would arise otherwise. However, the problem is actually more than a skin-deep technicality. The rest of this section does not try to present an alternative approach, but argues why the modular approach turns out to be infeasible.

⁶ In [Nag+20] BlacklistUser combines the tasks BlacklistWallet and RecalculateBalance.

⁷ The participation of the user must be optional in order to cover those cases in which the user refuses to participate.

Although UC promises modular proofs, complex protocols which have been proposed with a UC-proof rarely use this feature. Instead they are formalized as monolithic entities and proven secure from scratch. Fapc is no exception. Other examples are UC-formalizations of P-signatures [Cam+15] or commit-and-prove [Lin03]. We encounter the so-called "commitment problem" [CDT19] of simulation-based security definitions.

For the rest of the discussion, we need to anticipate some of the cryptographic building blocks, namely commitments as well as ZK-proofs (cp. Section 6.2), and how they are used in the realization (cp. Chapter 7). Readers who are completely unfamiliar with these building blocks should skip the rest of this section on a first reading.

A typical structure found in many complex protocols is a commit-and-prove construction, which is also widely used in our proposed realization. A party commits to a secret message m and proves in zero-knowledge that the message hidden inside the commitment fulfills some predicate P. In other words, the receiver of the commitment is not only assured that the sender is bound to a value, but also that this value fulfills certain constraints given by P. Although abstract UC-functionalities Fcom and FZK for commitments and zero-knowledge proofs, resp., exist [e.g., cp. Can05], these functionalities cannot be used to modularly construct a UC-functionality Fcp which in turn captures commit-and-prove. The underlying reason is that the actual primitives, i.e. real commitments and real ZK-proofs, offer interfaces which do not exist in the ideal functionalities. This makes it possible to assemble these primitives in the real model in a way that cannot be achieved in the ideal model. For the scenario at hand, i.e. the commit-and-prove construction, the problem is also illustrated in Fig. 5.1.

The ideal functionality FZK for zero-knowledge proofs is parameterized by a statement-witness relation R_gp consisting of tuples (*stmnt*, *wit*). When the prover inputs a tuple (*stmnt*, *wit*), FZK checks if R_gp is fulfilled and only outputs the decision bit to the verifier. For the commit-and-prove construction, the concrete ZK-relation is R_gp ≔ {(c, (m, d)) | P(m) ∧ Open(c, m, d) = 1} with the commitment c = *stmnt* as the statement and the secret message m together with the opening information d, i.e. (m, d) = *wit*, as the witness.

The ideal functionality Fcom for a commitment receives the message m from the committer, stores m internally and only outputs a notification bit to the receiver. Later, when the committer decides to unveil the message, Fcom forwards m to the receiver. Please note, that the receiver does not learn anything beyond a single bit in the commitment phase. Especially, there is no commitment message c. Fcom is information-theoretically secure. There is no decommitment d either, not even on the committer's side.

But this is exactly the point where the step-by-step transition from the real to the ideal model fails. FZK expects c and d as part of its input, which are output by the real commitment scheme, but not by its ideal counterpart.⁸ Interestingly, the (completely) real construction is a UC-secure realization of Fcp⁹ nonetheless, but this can only be proven in a monolithic way, not by composition.
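The interface mismatch can be made tangible with a toy sketch of the data flow. The hash-based commitment below is only an illustrative assumption, not the algebraic commitments of Chapter 6; the point is merely that forming the ZK witness requires the explicit values c and d, which an ideal Fcom never hands out.

```python
# Toy illustration of the data flow that breaks the modular proof: the base protocol needs
# the commitment message c and the decommitment d as explicit values to form the ZK witness,
# but the ideal functionality F_com would output neither of them.
# The hash-based commitment is only a stand-in; the real construction is algebraic.
import os, hashlib


def commit(m: bytes):
    d = os.urandom(16)                              # decommitment / opening information
    c = hashlib.sha256(d + m).hexdigest()           # commitment message
    return c, d


def open_commit(c: str, m: bytes, d: bytes) -> bool:
    return c == hashlib.sha256(d + m).hexdigest()


def zk_statement_and_witness(m: bytes, predicate):
    c, d = commit(m)                                # whitebox use of the *real* commitment
    stmnt, wit = c, (m, d)                          # R = {(c,(m,d)) | P(m) and Open(c,m,d)=1}
    assert predicate(m) and open_commit(c, m, d)
    return stmnt, wit                               # with F_com there would be no (c, d) here


stmnt, wit = zk_statement_and_witness(b"42", lambda m: m.isdigit())
```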

Figure 5.1 shows a commuting diagram which illustrates the favored, but failing construction. The upper left corner represents a completely real protocol consisting of a real commitment scheme com, a real ZK-scheme ZK and a base protocol cp. The base protocol receives a message m from the environment, inputs it into com, receives a commitment c together with a decommitment d, feeds these into ZK and finally outputs what ZK outputs. Please note, that the construction only works because the base protocol exploits com. The base protocol accesses com in a whitebox fashion and directly utilizes the commit message c and decommitment information d from the underlying commitment primitive, although (c, d) is not part of the prescribed output of Fcom (illustrated by the curly line). The lower left corner represents the ideal functionality Fcp. Going down from upper left to lower left is possible and represents the monolithic proof. However, the preferred, modular proof would first go to the upper right. The upper right corner represents a hybrid in which ZK has been replaced by FZK. This step is (syntactically) possible and the proof (using suitable building blocks) also holds. However, replacing com by Fcom is impossible, as the decommitment is lost. I.e., the construction in the lower right corner does not even exist.

At first glance, the problem arises from the fact that Fcom provides information-theoretic security, while a UC-secure real commitment only provides computational security.¹⁰ Bearing in mind that the completely real construction is indeed UC-secure, one might conjecture that the ideal formalizations of the building blocks are overly strong beyond what is required for security. To address this problem, a number of approaches have been proposed—none of them, however, being able to fully satisfactorily formalize the weaker guarantees achieved by regular schemes. First, the notion of non-information oracles [CK02] has been proposed, which essentially embeds a game-based definition in a composable abstraction module. Unfortunately, it remains unclear what "kind" of security this new notion implies. In essence, a non-information oracle provides the missing piece of information. But there is no sound justification why the facilitated construction remains secure, especially why it does not fail under arbitrary composition.

⁸ Note that we are deliberately lying at this point. Of course, the real UC-commitment scheme com must not output the decommitment to the committer, because otherwise it would not even syntactically realize Fcom due to a different I/O interface. The crucial point is that the commit-and-prove construction uses the game-based notion of a commitment scheme or makes white-box access to com.

⁹ Again, we are deliberately lying. Of course, there is no automatism that all combinations of a real commitment scheme and a real ZK-protocol which are composed this way yield a UC-secure commit-and-prove scheme. The commitment scheme and the ZK-protocol still need to be carefully chosen.

¹⁰ This is so, as a UC-commitment needs to be extractable.


Figure 5.1: The commitment problem in case of commit-and-prove

Only very recently, Camenisch, Drijvers, and Tackmann [CDT19] have captured this lack of modular proofs as a generic problem and proposed a new framework called multi-protocol UC. The essential trick is not to consider a single challenge protocol com or ZK and replace them step-by-step, but to consider a set of challenge protocols (com, ZK) and replace them en bloc. The obvious drawback is that one must still separately prove that (com, ZK) jointly realize (Fcom, FZK). Hence, the approach is not completely modular, but still somewhat modular. Moreover, it is not completely clear whether the construction is free of flaws. For the proof, Camenisch, Drijvers, and Tackmann [CDT19] assume the verifier to be incorruptible, which is a very strong and unrealistic assumption. They only argue that this condition can be dropped.

On top, our proposed construction requires a feature which has not been addressed so far: we require *homomorphic* commitments. This makes the problem of modularity even worse.

## **6 Assumptions and Building Blocks**

In this chapter we introduce the algebraic setting and the building blocks we make use of. In particular, the latter include non-interactive zero-knowledge proofs, commitments, signatures, encryption and pseudo-random functions. For efficiency reasons our building blocks are not completely generic and do not work over sets of arbitrary (unstructured) bit strings, but are algebraic and share particularly related groups as their common definitional space. In Section 6.1 we describe this common framework. In Section 6.2 we define the building blocks, describe possible instantiations for these building blocks and explain how these primitives are used in our system.

## **6.1 Algebraic Setting and Hardness Assumptions**

The following definitions are adopted from [Nag+17].

#### **Definition 6.1 (Pairing)**

*(1) Let* G_1 = ⟨g_1⟩*,* G_2 = ⟨g_2⟩*,* G_T = ⟨g_T⟩ *be three cyclic groups of prime order p and* g_1*,* g_2*,* g_T*, resp., their generators. A map* e : G_1 × G_2 → G_T *with*

$$\forall\, a \in G_1,\ b \in G_2,\ x, y \in \mathbb{Z}_\mathfrak{p} \,:\, e(a^{x}, b^{y}) = e(a, b)^{xy} \tag{6.1}$$

*is called* bilinear *or a* pairing*.*

*(2) A pairing is called* non-degenerate *if and only if* e(g_1, g_2) *generates* G_T*.*

Please note, that the co-domain of a pairing is a sub-group of G_T. Hence, for prime-order groups, e is either trivial, i.e. e(a, b) = 1 for all a ∈ G_1, b ∈ G_2, or non-degenerate, i.e. e generates the whole target group.

Moreover, e is called efficiently computable if there is an efficient algorithm that evaluates e on its inputs. In cryptography, we are only interested in "useful" pairings that are non-degenerate and efficient. From here on, the term "pairing" always implicitly denotes this particular kind of pairing.

**Definition 6.2 (Prime-order Bilinear Group Generator)** *A* prime-order bilinear group generator *is a PPT algorithm* Setup *that on input of a security parameter* 1^n *outputs a tuple of the form*

$$gp := (G_1, G_2, G_\mathrm{T}, e, \mathfrak{p}, g_1, g_2) \leftarrow \mathsf{Setup}(1^n) \tag{6.2}$$

*with* G_1*,* G_2*,* G_T *being descriptions of cyclic groups of prime order p,* log p = Θ(n)*,* g_1 *being a generator of* G_1*,* g_2 *being a generator of* G_2*, and* e : G_1 × G_2 → G_T *being a (non-degenerate, efficient) pairing. W.l.o.g. we assume* g_T = e(g_1, g_2)*. We call gp a (prime-order) bilinear group description.*

A bilinear group description can be typed according to how the involved groups relate to each other with respect to computational complexity.

**Definition 6.3 (Types of Bilinear Group Setting)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) *be a bilinear group description as above.*


In the remainder of this thesis, we only consider the asymmetric setting.

Most of our building blocks make use of a particular projection F_gp : ℤ_p^∗ × G_1^∗ × ℤ_p^∗ × G_2^∗ → G_1^∗ × G_2^∗ with ∗ denoting an arbitrary number of components. For lack of a better name we simply call it the F_gp-mapping.

**Definition 6.4 (F_gp-mapping)** *Let* G_1 = ⟨g_1⟩ *and* G_2 = ⟨g_2⟩ *be as before. For* α, β, γ, δ ∈ ℕ *let the family of functions* {F_gp^(α,β,γ,δ)}_{α,β,γ,δ} *be defined as*

$$F_{gp}^{(\alpha,\beta,\gamma,\delta)}: \begin{cases} \mathbb{Z}_\mathfrak{p}^{\alpha} \times G_1^{\beta} \times \mathbb{Z}_\mathfrak{p}^{\gamma} \times G_2^{\delta} \to G_1^{\alpha+\beta} \times G_2^{\gamma+\delta}, \\ (x_1, \ldots, x_{\alpha}, h_{1,1}, \ldots, h_{1,\beta}, y_1, \ldots, y_{\gamma}, h_{2,1}, \ldots, h_{2,\delta}) \mapsto \\ \qquad (g_1^{x_1}, \ldots, g_1^{x_\alpha}, h_{1,1}, \ldots, h_{1,\beta}, g_2^{y_1}, \ldots, g_2^{y_\gamma}, h_{2,1}, \ldots, h_{2,\delta}) \end{cases} \tag{6.3}$$

*Informally,* F_gp^(α,β,γ,δ) *maps* ℤ_p*-elements to* G_1 *and* G_2*, resp., by exponentiation and acts on* G_1 *and* G_2 *as the identity function.*

*Then, F_gp is defined as the "union" over this family, or more precisely* F_gp(x) ≔ F_gp^(α,β,γ,δ)(x) *for* x ∈ ℤ_p^α × G_1^β × ℤ_p^γ × G_2^δ*.*

Beware that F_gp^(α,0,γ,δ) and F_gp^(α′,0,γ′,δ) are syntactically indistinguishable for α + γ = α′ + γ′, but in the following the correct domain is always clear from the context. F_gp is indeed a projection as F_gp ∘ F_gp = F_gp (for matching choices of dimensions). Moreover, if the domain is fixed (i.e. for given α, β, γ, δ), then F_gp is injective and thus invertible¹ on the restricted domain. (This is important in the later definition of proof of knowledge.)
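To make the behavior of the F_gp-mapping concrete, the following sketch emulates it over a small schoolbook subgroup. The group, its parameters and the function name are illustrative assumptions and not the actual bilinear setting.

```python
# A toy emulation of the F_gp-mapping: Z_p-components are lifted to the group by
# exponentiation, group components are passed through unchanged.
# All concrete numbers (p, q, g1, g2) are illustrative assumptions, not the real setting.

p = 11          # toy prime group order
q = 23          # toy modulus; the subgroup of order p lives in Z_q^*
g1, g2 = 2, 3   # toy "generators" of the two source groups


def f_gp(scalars1, elems1, scalars2, elems2):
    """Maps (Z_p^a x G_1^b x Z_p^c x G_2^d) -> (G_1^(a+b) x G_2^(c+d))."""
    lifted1 = [pow(g1, x, q) for x in scalars1] + list(elems1)
    lifted2 = [pow(g2, y, q) for y in scalars2] + list(elems2)
    return lifted1, lifted2


# F_gp is a projection: applying it twice (with matching dimensions) changes nothing.
out1, out2 = f_gp([5, 7], [pow(g1, 3, q)], [2], [])
again1, again2 = f_gp([], out1, [], out2)
assert (again1, again2) == (out1, out2)
print(out1, out2)
```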

We are now ready to present the hardness assumptions which the instantiations of our building blocks rely upon; they conclude this section.

The co-CDH assumption is defined as follows

**Definition 6.5 (Co-CDH Assumption)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n)*. We say that the* co-CDH assumption *holds with respect to gp, if the advantage* Adv^{co-CDH}_{gp,A}(1^n) *defined by*

$$\Pr\left[a = g_2^{x} \,\middle|\, \begin{array}{l} x \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p} \\ a \leftarrow \mathcal{A}(1^n, gp, g_1^{x}) \end{array} \right] \tag{6.4}$$

*is negligible in n for all PPT algorithms* A*.*

The SXDH assumption essentially asserts that the DDH assumption holds in both source groups G_1 and G_2 and is formally defined as:

**Definition 6.6 (SXDH Assumption)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n)*.*

*(1) We say that the* DDH assumption *holds with respect to gp over* G_i *if the advantage* Adv^{DDH}_{gp,i,A}(1^n) *defined by*

$$\left| \Pr \left[ b = b' \,\middle|\, \begin{array}{l} x, y, z \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p};\ h_0 := g_i^{xy};\ h_1 := g_i^{z} \\ b \xleftarrow{\mathrm{R}} \{0, 1\} \\ b' \leftarrow \mathcal{A}(1^n, gp, g_i^{x}, g_i^{y}, h_b) \end{array} \right] - \frac{1}{2} \right| \tag{6.5}$$

*is negligible in n for all PPT algorithms* A*.*

*(2) We say that the* SXDH assumption *holds with respect to gp, if the above holds for both* i = 1 *and* i = 2*.*

¹ N.b., we do not require F_gp to be efficiently invertible.
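The DDH experiment from Definition 6.6 can be pictured with the following harness. It runs in a tiny schoolbook subgroup, so DDH is of course easy there; the toy parameters and the trivially guessing adversary are assumptions chosen only to show the experiment's structure, not to say anything about real hardness.

```python
# Toy DDH experiment (Definition 6.6) in a small subgroup of Z_q^*.
import random

p, q, g = 11, 23, 2   # toy subgroup of order p generated by g (illustrative assumption)


def ddh_experiment(adversary, trials=10_000):
    wins = 0
    for _ in range(trials):
        x, y, z = (random.randrange(1, p) for _ in range(3))
        h0 = pow(g, x * y, q)          # "real" Diffie-Hellman value
        h1 = pow(g, z, q)              # random group element
        b = random.randrange(2)
        b_guess = adversary(pow(g, x, q), pow(g, y, q), (h0, h1)[b])
        wins += (b_guess == b)
    return abs(wins / trials - 0.5)    # empirical advantage


# A trivial adversary that guesses at random has (empirical) advantage close to 0.
print(ddh_experiment(lambda gx, gy, hb: random.randrange(2)))
```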

The q′-DDHI assumption (Decisional Diffie-Hellman Inversion assumption) states that no efficient adversary can distinguish g_1^{1/x} from a random group element after having seen q′ consecutive group elements g_1^{x}, g_1^{x²}, …, g_1^{x^{q′}} for increasing powers of x.

**Definition 6.7 (**q′**-DDHI Assumption)** *Let* G_1 *be a prime-order group of order* p ∈ Θ(2^n) *and generator* g_1*. We say that the* q′*-*DDHI assumption *holds with respect to* G_1 *if the advantage* Adv^{DDHI}_{G_1,q′,A}(1^n) *defined by*

$$\left| \Pr \left[ b = b' \,\middle|\, \begin{array}{l} x \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p};\ h_0 := g_1^{1/x};\ h_1 \xleftarrow{\mathrm{R}} G_1 \\ b \xleftarrow{\mathrm{R}} \{0, 1\} \\ b' \leftarrow \mathcal{A}\bigl(1^n, G_1, (g_1, g_1^{x}, g_1^{x^2}, \ldots, g_1^{x^{q'}}), h_b\bigr) \end{array} \right] - \frac{1}{2} \right| \tag{6.6}$$

*is negligible in n for all PPT algorithms* A*.*

The co-DLIN assumption is defined as follows:

**Definition 6.8 (co-DLIN Assumption)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n)*. We say that the* co-DLIN assumption *holds with respect to gp, if the advantage* Adv^{co-DLIN}_{gp,A}(1^n) *defined by*

$$\left|\Pr\left[b = b' \,\middle|\, \begin{array}{l} \alpha, \beta, \gamma \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p};\ b \xleftarrow{\mathrm{R}} \{0, 1\} \\ \check{h}_1 := g_1^{\alpha},\ \check{h}_2 := g_1^{\beta},\ \check{h}_3 := g_1^{\alpha+\beta+b\gamma} \\ \hat{h}_1 := g_2^{\alpha},\ \hat{h}_2 := g_2^{\beta},\ \hat{h}_3 := g_2^{\alpha+\beta+b\gamma} \\ b' \leftarrow \mathcal{A}(1^n, gp, \check{h}_1, \check{h}_2, \check{h}_3, \hat{h}_1, \hat{h}_2, \hat{h}_3) \end{array} \right] - \frac{1}{2}\right| \tag{6.7}$$

*is negligible in n for all PPT algorithms* A*.*

Our construction relies on the co-CDH assumption and the security of our building blocks (cp. Section 6.2). For our special instantiation of the building blocks, security holds under the SXDH and co-DLIN assumption. The former implies the co-CDH assumption.

## **6.2 Cryptographic Building Blocks**

Our semi-generic construction makes use of various cryptographic primitives including (F_gp-extractable) NIZK proofs, equivocal and extractable homomorphic commitments, digital signatures, public-key encryption, symmetric encryption and pseudo-random functions. All building blocks are aligned to a bilinear group setting of type 3, i.e. they do not require an efficiently computable homomorphism between the involved groups. On the contrary, the security of their instantiations relies on the fact that such a homomorphism does not exist. In the following, gp ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) denotes a suitable bilinear group description (cp. Definition 6.2).

Additionally, the latter building blocks need to be efficiently and securely combinable with the chosen NIZK proof system, which is Groth-Sahai (GS) in our case. In the following, we formally define these building blocks and describe possible instantiations.

#### **6.2.1 Non-Interactive Zero-Knowledge Proofs**

Let R_gp be a witness relation for some NP language

$$L_{gp} = \{ \mathit{stmnt} \mid \exists\, \mathit{wit} \text{ s.t. } (\mathit{stmnt}, \mathit{wit}) \in R_{gp} \}. \tag{6.8}$$

A zero-knowledge proof allows a prover P to convince a verifier V that some *stmnt* is contained in L_gp without V learning anything beyond that fact. In a non-interactive zero-knowledge proof (NIZK), only one message, namely the proof π, is sent from P to V for that purpose.

More precisely, a (group-based) NIZK scheme is defined as:

**Definition 6.9 (Non-Interactive Zero-Knowledge Proof Scheme)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) *be as usual and F_gp the projection as in Definition 6.4. Let R_gp be an efficiently verifiable relation containing tuples* (*stmnt*, *wit*)*. We call stmnt the statement, and wit the witness. Let L_gp be the language containing all statements stmnt such that* (*stmnt*, *wit*) ∈ R_gp *for some wit. Let* POK ≔ (Setup, Prove, Vfy) *be a tuple of PPT algorithms such that*


POK *is called a NIZK proof scheme for* R_gp *with F_gp-extractability, if the following properties are satisfied:*


$$\Pr\left[0 \leftarrow \mathsf{Vfy}(\mathit{crs}_{\mathrm{pok}}, \mathit{stmnt}, \pi) \,\middle|\, \begin{array}{l} \mathit{crs}_{\mathrm{pok}} \leftarrow \mathsf{Setup}(gp) \\ (\mathit{stmnt}, \pi) \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{pok}}) \\ \mathit{stmnt} \notin L_{gp} \end{array} \right] \tag{6.9}$$


*is* 1*.*

- *(a) we have that the advantage* Adv^{pok-setup-ext}_{POK,A}(gp) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{pok}}) \,\middle|\, \mathit{crs}_{\mathrm{pok}} \leftarrow \mathsf{Setup}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{pok}}) \,\middle|\, (\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{epok}}) \leftarrow \mathsf{SetupExt}(gp)\right] \right| \tag{6.10}$$

*is zero.*

*(b) we have that the probability* Succ^{pok-ext}_{POK,A}(gp) *of a successful extraction defined by*

$$\Pr\left[ \begin{array}{l} \exists\, \mathit{wit} :\ F_{gp}(\mathit{wit}) = \mathit{Wit} \ \wedge \\ \quad (\mathit{stmnt}, \mathit{wit}) \in R_{gp} \end{array} \,\middle|\, \begin{array}{l} (\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{epok}}) \leftarrow \mathsf{SetupExt}(gp) \\ (\mathit{stmnt}, \pi) \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{epok}}) \\ 1 \leftarrow \mathsf{Vfy}(\mathit{crs}'_{\mathrm{pok}}, \mathit{stmnt}, \pi) \\ \mathit{Wit} \leftarrow \mathsf{Extract}(\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{epok}}, \mathit{stmnt}, \pi) \end{array} \right] \tag{6.11}$$

*is* 1*.*

- *(a) we have that the advantage* Adv^{pok-setup-sim}_{POK,A}(gp) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{pok}}) \,\middle|\, \mathit{crs}_{\mathrm{pok}} \leftarrow \mathsf{Setup}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{pok}}) \,\middle|\, (\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}) \leftarrow \mathsf{SetupSim}(gp)\right] \right| \tag{6.12}$$

*is negligible in n.²*

² N.b., the terms implicitly depend on n, because gp ← Setup(1^n) does and we require log p ∈ Θ(n) for the group modulus p.

*(b) we have that the advantage* Adv^{pok-zk}_{POK,A}(gp) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}^{\mathsf{ProveSim}(\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}, \cdot, \cdot)}(\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}) \,\middle|\, (\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}) \leftarrow \mathsf{SetupSim}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}^{\mathsf{Prove}(\mathit{crs}'_{\mathrm{pok}}, \cdot, \cdot)}(\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}) \,\middle|\, (\mathit{crs}'_{\mathrm{pok}}, \mathit{td}_{\mathrm{spok}}) \leftarrow \mathsf{SetupSim}(gp)\right] \right| \tag{6.13}$$

*is negligible in n. Here,* A *has oracle access either to* ProveSim(crs′_pok, td_spok, ⋅, ⋅) *or* Prove(crs′_pok, ⋅, ⋅)*. Both* ProveSim *and* Prove *return* ⊥ *on input* (*stmnt*, *wit*) ∉ R_gp*.*

We wish to point out some remarks.

#### **Remark 6.10**


#### **Our Instantiation**

We choose the SXDH-based Groth-Sahai (GS) proof system [EG14; GS08] as our NIZK scheme. On the one hand, it allows for very efficient proofs under standard assumptions. On the other hand, GS comes with two drawbacks which make using it sometimes quite tricky:


We work around both issues by carefully choosing the remaining building blocks and the languages of NP-statements we need to prove. Also, in many places, the proof of security for our system does not require a true proof of knowledge; the existence of a unique witness suffices. This holds indeed, if the co-domain of F_gp is restricted to the particular NP-language under consideration.

For the sake of completeness, we summarize what types of equations are supported by GS. In the following, let X_1, X_2, … ∈ G_1, x_1, x_2, … ∈ ℤ_p, Y_1, Y_2, … ∈ G_2, as well as y_1, y_2, … ∈ ℤ_p denote secret variables, i.e. the witnesses, and let A, A_1, A_2, … ∈ G_1, a_1, a_2, … ∈ ℤ_p, B, B_1, B_2, … ∈ G_2, b_1, b_2, … ∈ ℤ_p, C ∈ G_T as well as c, c_{1,1}, c_{1,2}, c_{2,1}, … ∈ ℤ_p denote public constants.

• *Pairing-Product Equation (PPE):*

$$\prod_{i} e\left(A_{i}, Y_{i}\right) \prod_{j} e\left(X_{j}, B_{j}\right) \prod_{i} \prod_{j} e\left(X_{i}, Y_{j}\right)^{c_{i,j}} = C \tag{6.14}$$

if there is a known decomposition C = ∏_i e(A′_i, B′_i) with public constants A′_1, A′_2, … ∈ G_1 and B′_1, B′_2, … ∈ G_2.

• *Multi-Scalar Equation (MSE) over* G_1*:*

$$\prod_{i} A_{i}^{x_{i}} \prod_{j} X_{j}^{a_{j}} \prod_{i} \prod_{j} X_{j}^{c_{i,j} x_{i}} = A \tag{6.15}$$

• *Multi-Scalar Equation (MSE) over* G_2*:*

$$\prod_{i} B_{i}^{y_{i}} \prod_{j} Y_{j}^{b_{j}} \prod_{i} \prod_{j} Y_{j}^{c_{i,j} y_{i}} = B \tag{6.16}$$

• *Quadratic Equation (QE) over* ℤ *:*

$$
\sum\_{i} a\_i \mathbf{x}\_i + \sum\_{j} b\_j \mathbf{y}\_j + \sum\_{i} \sum\_{j} c\_{i,j} \mathbf{x}\_i \mathbf{y}\_j = \mathbf{c} \tag{6.17}
$$

#### **6.2.2 Commitments**

A commitment scheme is a two-party protocol between a sender and a receiver. In the first phase—called the commit phase—the sender commits itself to a message such that the message remains hidden from the receiver. Later, in the second phase—called the unveil phase—the sender unveils the message to the receiver and the receiver is convinced that the sender has been bound to the original message and is unable to claim a different message. A commitment scheme is called non-interactive, if committing and unveiling each only require a single message from the sender to the receiver. Let gp ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) be as usual and F_gp the projection as in Definition 6.4. A commitment scheme is called a (group-based) commitment scheme with F_gp-binding, if the sender commits to a message m but unveils the commitment using F_gp(m). We call the co-domain of F_gp the implicit message space.

#### **Definition 6.11 ((Group-Based, Non-Interactive) Commitment Scheme)**

*A* (group-based) commitment scheme COM ≔ (Setup, Commit, Open) *with F_gp-binding consists of three algorithms:*


COM *is* correct *if* Open(crs_com, F_gp(m), c, d) = 1 *holds for all* crs_com ← Setup(gp)*,* m ∈ M*, and* (c, d) ← Commit(crs_com, m)*.*

*We say that* COM *is* hiding*,* F_gp-binding*,* equivocal *and* extractable*, if the following properties hold:*

*(1)* Hiding: *For all PPT adversaries* A *it holds that the advantage* Adv^{Hiding}_{COM,A}(gp) *defined by*

$$\left| \Pr \left[ b = b' \,\middle|\, \begin{array}{l} \mathit{crs}_{\mathrm{com}} \leftarrow \mathsf{Setup}(gp) \\ (m_0, m_1, \mathit{state}) \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{com}}) \\ b \xleftarrow{\mathrm{R}} \{0, 1\} \\ (c, d) \leftarrow \mathsf{Commit}(\mathit{crs}_{\mathrm{com}}, m_b) \\ b' \leftarrow \mathcal{A}(c, \mathit{state}) \end{array} \right] - \frac{1}{2} \right| \tag{6.18}$$

*is negligible in n. The scheme is called* statistically hiding *if* Adv^{Hiding}_{COM,A}(gp) *is negligible even for an unbounded adversary* A*.*

*(2)* F_gp-binding: *For all PPT adversaries* A *it holds that the advantage* Adv^{F_gp-binding}_{COM,A}(gp) *defined by*

$$\Pr\left[\begin{array}{l} \mathsf{Open}(\mathit{crs}_{\mathrm{com}}, M, c, d) = 1 \ \wedge \\ \mathsf{Open}(\mathit{crs}_{\mathrm{com}}, M', c, d') = 1 \ \wedge \\ \qquad M \neq M' \end{array} \,\middle|\, \begin{array}{l} \mathit{crs}_{\mathrm{com}} \leftarrow \mathsf{Setup}(gp) \\ (c, M, d, M', d') \leftarrow \mathcal{A}(1^n, \mathit{crs}_{\mathrm{com}}) \end{array} \right] \tag{6.19}$$

*is negligible in n.*

*(3)* Equivocal: *There exist PPT algorithms* SetupSim*,* CommitSim *and* Equivoke *such that for all PPT adversaries* A

*(a) we have that the advantage* Adv^{SetupSim}_{COM,A}(gp) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{com}}) \,\middle|\, \mathit{crs}_{\mathrm{com}} \leftarrow \mathsf{Setup}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{com}}) \,\middle|\, (\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}) \leftarrow \mathsf{SetupSim}(gp)\right] \right| \tag{6.20}$$

*is negligible in n.*

*(b) we have that the advantage* Adv^{Equiv}_{COM,A}(gp) *defined by*

$$\begin{aligned} \Biggl| \Pr \left[ 1 \leftarrow \mathcal{A}\bigl(\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}, m, c, d\bigr) \,\middle|\, \begin{array}{l} (\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}) \leftarrow \mathsf{SetupSim}(gp), \\ m \leftarrow \mathcal{M}, \\ (c, d) \leftarrow \mathsf{Commit}(\mathit{crs}'_{\mathrm{com}}, m) \end{array} \right] \\ - \Pr \left[ 1 \leftarrow \mathcal{A}\bigl(\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}, m, c', d'\bigr) \,\middle|\, \begin{array}{l} (\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}) \leftarrow \mathsf{SetupSim}(gp), \\ m \leftarrow \mathcal{M}, \\ (c', r) \leftarrow \mathsf{CommitSim}(\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}), \\ d' \leftarrow \mathsf{Equivoke}(\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{eqcom}}, m, r) \end{array} \right] \Biggr| \end{aligned} \tag{6.21}$$

*is zero.*

*(4)* F_gp-Extractable: *There exist PPT algorithms* SetupExt *and* Extract *such that for all PPT adversaries* A

*(a) we have that the advantage* Adv^{SetupExt}_{COM,A}(gp) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}_{\mathrm{com}}) \,\middle|\, \mathit{crs}_{\mathrm{com}} \leftarrow \mathsf{Setup}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{com}}) \,\middle|\, (\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{extcom}}) \leftarrow \mathsf{SetupExt}(gp)\right] \right| \tag{6.22}$$

*is negligible in n.*

*(b) we have that the advantage* Adv^{Ext}_{COM,A}(gp) *defined by*

$$\Pr\left[\begin{array}{l} \exists\, m, r :\ c = \mathsf{Commit}(\mathit{crs}'_{\mathrm{com}}, m; r) \ \wedge \\ \quad F_{gp}(m) \neq \mathsf{Extract}(\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{extcom}}, c) \end{array} \,\middle|\, \begin{array}{l} (\mathit{crs}'_{\mathrm{com}}, \mathit{td}_{\mathrm{extcom}}) \leftarrow \mathsf{SetupExt}(gp), \\ c \leftarrow \mathcal{A}(\mathit{crs}'_{\mathrm{com}}) \end{array} \right] \tag{6.23}$$

*is zero.*

*Furthermore, assume that the message space of* COM *is an additive group. Then* COM *is called* additively homomorphic*, if there exist additional PPT algorithms* c ← AddC(crs_com, c_1, c_2) *and* d ← AddD(crs_com, d_1, d_2) *which on input of two commitments and corresponding decommitment values* (c_1, d_1) ← Commit(crs_com, m_1) *and* (c_2, d_2) ← Commit(crs_com, m_2)*, output a commitment* c *and decommitment* d*, respectively, such that* Open(crs_com, F_gp(m_1 + m_2), c, d) = 1*.*

*Finally, we call* COM opening complete *if for all* M ∈ M′ *and arbitrary values* c, d *with* Open(crs_com, M, c, d) = 1 *it holds that there exists* m ∈ M *and randomness* r *such that* (c, d) ← Commit(crs_com, m; r)*.*
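To illustrate the additive homomorphism and the F_gp-binding flavor of Definition 6.11, the following is a minimal sketch based on a schoolbook Pedersen commitment over a toy subgroup. The group, the parameters and the function names are illustrative assumptions, not the schemes instantiated below.

```python
# A minimal sketch of an additively homomorphic commitment in the spirit of Definition 6.11.
# Openings are checked against M = g^m (an "F_gp-mapped" message), not against m itself.
import random

p, q = 11, 23                 # toy subgroup order p inside Z_q^*
g, h = 2, 9                   # toy generators of the order-p subgroup


def commit(m, r=None):
    r = random.randrange(p) if r is None else r
    c = (pow(g, m % p, q) * pow(h, r, q)) % q
    return c, r               # (commitment, decommitment)


def open_commit(M, c, d):
    return c == (M * pow(h, d, q)) % q


def add_c(c1, c2):            # AddC: combine commitments
    return (c1 * c2) % q


def add_d(d1, d2):            # AddD: combine decommitments
    return (d1 + d2) % p


m1, m2 = 3, 5
c1, d1 = commit(m1)
c2, d2 = commit(m2)
assert open_commit(pow(g, (m1 + m2) % p, q), add_c(c1, c2), add_d(d1, d2))
```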

#### **Our Instantiation**

We will make use of two commitment schemes that are both based on the SXDH assumption. We first use the shrinking ℓ-message commitment scheme from Abe et al. [Abe+15]. This commitment scheme has message space ℤ_p^ℓ, commitment space G_2 and opening value space G_1. It is statistically hiding, additively homomorphic, equivocal, and F′_gp-binding for F′_gp(m_1, …, m_ℓ) ≔ (g_1^{m_1}, …, g_1^{m_ℓ}). We use this commitment scheme as C1 with CRS crs^(1)_com and C2 with CRS crs^(2)_com in the following ways in our system:


We also use the (dual-mode) equivocal and extractable commitment scheme from Groth and Sahai [GS08]. This commitment scheme has message space G_1, commitment space G_1^2 and opening value space ℤ_p^2. It is equivocal, extractable, hiding and F′_gp-binding for F′_gp(m) ≔ m. In our system, we use this commitment scheme as C4 with CRS crs^(4)_com in IssueWallet and Deposit.

#### **6.2.3 Digital Signatures**

A signature scheme allows a signer to issue a signature σ on a message m using its secret signing key sk such that anybody can publicly verify that σ is a valid signature for m using the public verification key pk of the signer, but nobody can feasibly forge a signature without knowing sk. Again, we only consider a group-based setting. The standard definition of signature scheme security, EUF-CMA, has been introduced by Goldwasser, Micali, and Rivest [GMR88].

**Definition 6.12 ((Group-Based) Signature Scheme)** *A* digital signature scheme SIG ≔ (Gen, Sign,Vfy) *consists of three PPT algorithms:*

*•* Gen *takes gp as input and outputs a key pair* (*pk*,*sk*)*. The public key and gp define a message space* M*.*


*We call* SIG *correct if for all* m ∈ M*,* (pk, sk) ← Gen(gp)*,* σ ← Sign(sk, m) *we have* 1 ← Vfy(pk, σ, m)*.*

*We say that* SIG *is EUF-CMA secure if for all PPT adversaries* A *it holds that the advantage* Adv^{EUF-CMA}_{SIG,A}(1^n) *defined by*

$$\Pr\left[\mathsf{Vfy}(pk, \sigma^{*}, m^{*}) = 1 \,\middle|\, \begin{array}{l} gp \leftarrow \mathsf{Setup}(1^n) \\ (pk, sk) \leftarrow \mathsf{Gen}(gp) \\ (m^{*}, \sigma^{*}) \leftarrow \mathcal{A}^{\mathsf{Sign}(sk, \cdot)}(1^n, pk) \\ m^{*} \notin \{m_1, \ldots, m_q\} \end{array} \right] \tag{6.24}$$

*is negligible in n, where* Sign(sk, ⋅) *is an oracle that, on input m, returns* Sign(sk, m)*, and* {m_1, …, m_q} *denotes the set of messages queried by* A *to its oracle.*
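The Gen/Sign/Vfy interface of Definition 6.12 can be pictured with the following sketch. It uses Ed25519 from the `cryptography` package purely as a stand-in for an EUF-CMA secure scheme; the thesis itself relies on a structure-preserving, group-based scheme instead, so this is only an interface illustration.

```python
# Minimal sketch of the Gen/Sign/Vfy interface (Definition 6.12), instantiated with Ed25519
# for illustration only; the actual construction uses a structure-preserving scheme.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature


def gen():
    sk = Ed25519PrivateKey.generate()
    return sk.public_key(), sk          # (pk, sk)


def sign(sk, m: bytes) -> bytes:
    return sk.sign(m)


def vfy(pk, sigma: bytes, m: bytes) -> bool:
    try:
        pk.verify(sigma, m)
        return True
    except InvalidSignature:
        return False


pk, sk = gen()
sigma = sign(sk, b"wallet update")
assert vfy(pk, sigma, b"wallet update") and not vfy(pk, sigma, b"forged message")
```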

#### **Our Instantiation**

As we need to prove statements about signatures, the signature scheme has to be algebraic. For our construction, we use the structure-preserving signature scheme of Abe et al. [Abe+11], which is currently the most efficient structure-preserving signature scheme. Its EUF-CMA security proof is in the generic group model, a restriction we consider reasonable with respect to our goal of constructing a highly efficient BBA+ scheme. An alternative secure in the plain model would be [KPW15]. For the scheme in [Abe+11], one needs to fix two additional parameters μ, ν ∈ ℕ_0 defining the actual message space G_1^μ × G_2^ν. The secret key then consists of μ + ν + 2 elements of ℤ_p, the public key of a corresponding number of group elements, and a signature is of the form σ ∈ G_2^2 × G_1.

We use the signature scheme SIG from Abe et al. [Abe+11] in the following ways in our system:


#### **6.2.4 Asymmetric Encryption**

We use the standard definitions for asymmetric encryption schemes and corresponding security notions, except that the algorithms take gp ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) as an additional parameter to fit our algebraic setting.

**Definition 6.13 (Asymmetric Encryption)** *An* asymmetric encryption scheme ENC ≔ (Gen, Enc, Dec) *consists of three PPT algorithms:*


*Correctness is defined in the usual sense.*

*An asymmetric encryption scheme* ENC *is* IND-CCA2*-secure if for all PPT adversaries* A *it holds that the advantage* Adv^{IND-CCA-asym}_{ENC,A}(gp) *defined by*

$$\left| \Pr \left[ b = b' \middle| \begin{array}{c} (pk, sk) \leftarrow \mathsf{Gen}(gp) \\\\ (state, m\_0, m\_1) \leftarrow \mathcal{A}^{\mathsf{Dec}(sk, \cdot)}(1^n, pk) \\\\ b \xleftarrow{\mathsf{R}} \{0, 1\} \\\\ c^\* \leftarrow \mathsf{Enc}(pk, m\_b) \\\\ b' \leftarrow \mathcal{A}^{\mathsf{Dec'}(sk, \cdot)}(state, c^\*) \end{array} \right] - \frac{1}{2} \right| \tag{6.25}$$

*is negligible in n, with* |m_0| = |m_1|*,* Dec(sk, ⋅) *being an oracle that gets a ciphertext c from the adversary and returns* Dec(sk, c) *and* Dec′(sk, ⋅) *being the same, except that it returns* ⊥ *on input* c^∗*.*

*An asymmetric encryption scheme* ENC *is* NM-CCA2*-secure if for all PPT adversaries* A *it holds that the advantage* Adv^{NM-CCA}_{ENC,A}(1^n) *defined by*

$$\left| \text{Success}\_{\text{ENC}, \mathcal{R}, \text{real}}^{\text{NM-CCA}} (1^n) - \text{Success}\_{\text{ENC}, \mathcal{R}, \text{random}}^{\text{NM-CCA}} (1^n) \right| \tag{6.26}$$

*is negligible with*

$$\mathrm{Succ}^{\mathrm{NM\text{-}CCA}}_{\mathrm{ENC},\mathcal{A},\mathrm{real}}(1^{n}) \coloneqq \Pr\left[\begin{array}{l} c^{*} \notin \mathbf{c} \ \wedge \\ \bot \notin \mathbf{m} \ \wedge \\ R(m, \mathbf{m}) = 1 \end{array} \,\middle|\, \begin{array}{l} (pk, sk) \leftarrow \mathsf{Gen}(gp) \\ (\mathcal{M}, \mathit{state}) \leftarrow \mathcal{A}^{\mathsf{Dec}(sk, \cdot)}(1^n, pk) \\ m \xleftarrow{\mathrm{R}} \mathcal{M};\ c^{*} \leftarrow \mathsf{Enc}(pk, m) \\ (R, \mathbf{c}) \leftarrow \mathcal{A}^{\mathsf{Dec}'(sk, \cdot)}(1^n, \mathit{state}, c^{*}) \\ \mathbf{m} \leftarrow \mathsf{Dec}(sk, \mathbf{c}) \end{array} \right] \tag{6.27}$$

*and*

$$\mathrm{Succ}^{\mathrm{NM\text{-}CCA}}_{\mathrm{ENC},\mathcal{A},\mathrm{random}}(1^{n}) \coloneqq \Pr\left[\begin{array}{l} c^{*} \notin \mathbf{c} \ \wedge \\ \bot \notin \mathbf{m} \ \wedge \\ R(\widetilde{m}, \mathbf{m}) = 1 \end{array} \,\middle|\, \begin{array}{l} (pk, sk) \leftarrow \mathsf{Gen}(gp) \\ (\mathcal{M}, \mathit{state}) \leftarrow \mathcal{A}^{\mathsf{Dec}(sk, \cdot)}(1^n, pk) \\ m, \widetilde{m} \xleftarrow{\mathrm{R}} \mathcal{M};\ c^{*} \leftarrow \mathsf{Enc}(pk, m) \\ (R, \mathbf{c}) \leftarrow \mathcal{A}^{\mathsf{Dec}'(sk, \cdot)}(1^n, \mathit{state}, c^{*}) \\ \mathbf{m} \leftarrow \mathsf{Dec}(sk, \mathbf{c}) \end{array} \right], \tag{6.28}$$

*where* M *denotes a space of valid, equally long messages,* R ⊆ M × M^∗ *denotes a relation,* Dec(sk, ⋅) *is an oracle that gets a ciphertext c from the adversary and returns* Dec(sk, c) *and* Dec′(sk, ⋅) *is the same, except that it returns* ⊥ *on input* c^∗*.*

*An encryption is IND-CCA2 secure if and only if it is NM-CCA2 secure [Bel+98].*

#### **Our Instantiation**

We will make use of two different IND-CCA2-secure encryption schemes:


The scheme by Cash, Kiltz, and Shoup [CKS08] is based on the twin-DH assumption. For efficiency reasons we utilize the typical hybrid approach and use the asymmetric scheme to set up a session key for a symmetric encryption of messages following the KEM/DEM pattern (cp. Section 6.2.5). Note that we don't require any algebraic properties; especially, we don't need to prove anything about ciphertexts. For the ease of presentation, we act as if the message space of ENC2 was G_1 × G_1 × ℤ_p × (G_1^2 × G_2^3) × (G_2^2 × G_1), because this is the space of the recalculation tags ω_rc. However, the encryption scheme by Cash, Kiltz, and Shoup [CKS08] does not depend on this, but treats plain messages and ciphertexts as opaque bit strings.

The scheme for ENC1 is an adapted variant of the structure-preserving, IND-CCA2-secure encryption scheme by Camenisch et al. [Cam+11]. Thus, some remarks are in order. The original scheme is formalized for a group setting of type 1, but we need a scheme that is secure in the asymmetric type 3 setting. For the conversion we followed the generic transformation proposed by Abe et al. [Abe+14] with some additional, manual optimizations. The transformed scheme encrypts vectors of G_1-elements and is secure under the co-DLIN assumption (cp. Definition 6.8), which holds in the generic group model. This follows automatically from [Abe+14] (or can also be easily seen by inspecting the original proof in [Cam+11]). We present the modified scheme in full detail.

**Definition 6.14 (Type 3 Variant of Camenisch et al. [Cam+11])** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) *be as usual. Let* ℘ *be the dimension of the message space* G_1^℘*. The algorithms* Gen*,* Enc *and* Dec *are depicted in Figs. 6.1 to 6.3.*

We instantiate this scheme with ℘ = ℓ + 2.

#### **6.2.5 Symmetric Encryption**

We use standard definitions for symmetric encryption schemes and corresponding security notions.

**Definition 6.15 (Symmetric Encryption)** *A* symmetric encryption scheme ENC ≔ (Gen, Enc, Dec) *consists of three PPT algorithms:*


As for asymmetric encryption, we require correctness in the usual sense.

We now define a multi-message version of IND-CCA2 security. It is a well-known fact that IND-CCA2 security in the multi-message setting is equivalent to standard IND-CCA2 security. (This can be shown via a standard hybrid argument.)

$$\begin{aligned}
&\mathbf{Gen}(gp, \wp) \\
&\quad \text{parse } (G_1, G_2, G_\mathrm{T}, e, \mathfrak{p}, g_1, g_2) := gp \\
&\quad x_1, \ldots, x_\wp,\ y_0, \ldots, y_3,\ z_1, \ldots, z_\wp \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p}^3 \\
&\quad sk := (\{x_i\}_{i=1,\ldots,\wp},\ \{y_i\}_{i=0,\ldots,3},\ \{z_i\}_{i=1,\ldots,\wp}) \\
&\quad \rho_1, \rho_2, \rho_3 \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p}^{*} \\
&\quad \check{h}_1 := g_1^{\rho_1},\quad \check{h}_2 := g_1^{\rho_2},\quad \check{h}_3 := g_1^{\rho_3} \\
&\quad \hat{h}_1 := g_2^{\rho_1},\quad \hat{h}_2 := g_2^{\rho_2},\quad \hat{h}_3 := g_2^{\rho_3} \\
&\quad X_{i,1} := \check{h}_1^{x_{i,1}} \check{h}_3^{x_{i,3}},\quad X_{i,2} := \check{h}_2^{x_{i,2}} \check{h}_3^{x_{i,3}} \quad \text{for } i = 1, \ldots, \wp \\
&\quad Y_{i,1} := \hat{h}_1^{y_{i,1}} \hat{h}_3^{y_{i,3}},\quad Y_{i,2} := \hat{h}_2^{y_{i,2}} \hat{h}_3^{y_{i,3}} \quad \text{for } i = 0, \ldots, 3 \\
&\quad Z_{i,1} := \hat{h}_1^{z_{i,1}} \hat{h}_3^{z_{i,3}},\quad Z_{i,2} := \hat{h}_2^{z_{i,2}} \hat{h}_3^{z_{i,3}} \quad \text{for } i = 1, \ldots, \wp \\
&\quad pk := (\check{h}_1, \check{h}_2, \check{h}_3, \hat{h}_1, \hat{h}_2, \hat{h}_3, \{X_{i,1}, X_{i,2}\}_{i=1,\ldots,\wp}, \{Y_{i,1}, Y_{i,2}\}_{i=0,\ldots,3}, \{Z_{i,1}, Z_{i,2}\}_{i=1,\ldots,\wp}) \\
&\quad \textbf{return } (pk, sk)
\end{aligned}$$

Figure 6.1: The key generation algorithm Gen of the adapted CCA-secure encryption scheme by Camenisch et al. [Cam+11] with parameter ℘ and message space ℘ 1

$$\begin{aligned}
&\mathbf{Enc}(pk, m) \\
&\quad \text{parse } (\check{h}_1, \check{h}_2, \check{h}_3, \hat{h}_1, \hat{h}_2, \hat{h}_3, \{X_{i,1}, X_{i,2}\}_{i=1,\ldots,\wp}, \{Y_{i,1}, Y_{i,2}\}_{i=0,\ldots,3}, \{Z_{i,1}, Z_{i,2}\}_{i=1,\ldots,\wp}) := pk \\
&\quad r, s \xleftarrow{\mathrm{R}} \mathbb{Z}_\mathfrak{p} \\
&\quad \check{u}_1 := \check{h}_1^{r},\quad \check{u}_2 := \check{h}_2^{s},\quad \check{u}_3 := \check{h}_3^{r+s} \\
&\quad \hat{u}_1 := \hat{h}_1^{r},\quad \hat{u}_2 := \hat{h}_2^{s},\quad \hat{u}_3 := \hat{h}_3^{r+s} \\
&\quad c_i := m_i \cdot X_{i,1}^{r} X_{i,2}^{s} \quad \text{for } i = 1, \ldots, \wp \\
&\quad v := \prod_{i=0}^{3} e\bigl(\check{u}_i, Y_{i,1}^{r} Y_{i,2}^{s}\bigr) \prod_{i=1}^{\wp} e\bigl(c_i, Z_{i,1}^{r} Z_{i,2}^{s}\bigr) \quad \text{with } \check{u}_0 := g_1 \\
&\quad C := (\mathbf{u}, \mathbf{c}, v) \quad \text{with } \mathbf{u} := (\check{u}_1, \check{u}_2, \check{u}_3, \hat{u}_1, \hat{u}_2, \hat{u}_3) \text{ and } \mathbf{c} := (c_1, \ldots, c_\wp) \\
&\quad \textbf{return } C
\end{aligned}$$

Figure 6.2: The encryption algorithm Enc of the adapted CCA-secure encryption scheme by Camenisch et al. [Cam+11] with parameter ℘ and message space ℘ 1

$$\begin{aligned}
&\mathbf{Dec}(sk, C) \\
&\quad \text{parse } (\{x_i\}_{i=1,\ldots,\wp},\ \{y_i\}_{i=0,\ldots,3},\ \{z_i\}_{i=1,\ldots,\wp}) := sk \\
&\quad \text{parse } (\mathbf{u}, \mathbf{c}, v) := C,\ (\check{u}_1, \check{u}_2, \check{u}_3, \hat{u}_1, \hat{u}_2, \hat{u}_3) := \mathbf{u} \text{ and } (c_1, \ldots, c_\wp) := \mathbf{c} \\
&\quad \check{u}_0 := g_1 \\
&\quad \textbf{if } v \neq \prod_{i=0}^{3} e\bigl(\check{u}_i, \hat{u}_1^{y_{i,1}} \hat{u}_2^{y_{i,2}} \hat{u}_3^{y_{i,3}}\bigr) \prod_{i=1}^{\wp} e\bigl(c_i, \hat{u}_1^{z_{i,1}} \hat{u}_2^{z_{i,2}} \hat{u}_3^{z_{i,3}}\bigr) \textbf{ abort} \\
&\quad \textbf{if } e(\check{u}_i, g_2) \neq e(g_1, \hat{u}_i) \text{ for any } i \in \{1, 2, 3\} \textbf{ abort} \\
&\quad m_i := c_i \cdot \check{u}_1^{-x_{i,1}} \check{u}_2^{-x_{i,2}} \check{u}_3^{-x_{i,3}} \quad \text{for } i \in \{1, \ldots, \wp\} \\
&\quad m := (m_1, \ldots, m_\wp) \\
&\quad \textbf{return } m
\end{aligned}$$

Figure 6.3: The decryption algorithm Dec of the adapted CCA-secure encryption scheme by Camenisch et al. [Cam+11] with parameter ℘ and message space ℘ 1

**Definition 6.16 (IND-CCA2-Security for Symmetric Encryption)** *A symmetric encryption scheme* ENC *is* IND-CCA2*-secure if for all PPT adversaries* A *it holds that the advantage* Adv^{IND-CCA-sym}_{ENC,A}(1^n) *defined by*

$$\left| \Pr \left[ b = b' \begin{array}{c} sk \leftarrow \mathsf{Gen}(1^{n}) \\\\ (state, j, m\_{0}, m\_{1}) \leftarrow \mathcal{F}^{\mathsf{Enc}(sk, \cdot), \mathsf{Dec}(sk, \cdot)}(1^{n}) \\\\ b \leftarrow \{0, 1\} \\\\ \mathsf{c}^{\*} \leftarrow \left( \mathsf{Enc}(sk, m\_{b, 1}), \ldots, \mathsf{Enc}(sk, m\_{b, j}) \right) \\\\ b' \leftarrow \mathcal{F}^{\mathsf{Enc}(sk, \cdot), \mathsf{Dec}'(sk, \cdot)}(\mathsf{state}, \mathsf{c}^{\*}) \end{array} \right| - \frac{1}{2} \right| \tag{6.29}$$

*is negligible in n, where* m_0, m_1 *are two vectors of* j ∈ ℕ *bit strings each such that for all* 1 ≤ i ≤ j*:* |m_{0,i}| = |m_{1,i}|*,* Enc(sk, ⋅) *and* Dec(sk, ⋅) *denote oracles that return* Enc(sk, m) *and* Dec(sk, c) *for an m or c chosen by the adversary, and* Dec′(sk, ⋅) *is the same as* Dec(sk, ⋅)*, except that it returns* ⊥ *on input of any ciphertext that is contained in* c^∗*.*

#### **Our Instantiation**

We use an IND-CCA2-secure symmetric encryption scheme in our protocol to encrypt the exchanged protocol messages. To this end, we combine an IND-CCA2-secure asymmetric encryption (see Section 6.2.4) with an IND-CCA2-secure symmetric encryption in the usual KEM/DEM approach. The symmetric encryption can for example be instantiated with AES in CBC mode together with HMAC based on the SHA-256 hash function. The result will be IND-CCA2-secure if AES is a pseudo-random permutation and the SHA-256 compression function is a PRF when the data input is seen as the key [Bel15].
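A minimal sketch of the symmetric (DEM) part just described is given below: AES in CBC mode combined with HMAC-SHA256 in an encrypt-then-MAC fashion. It assumes the `cryptography` package is available; the key splitting and the framing of IV, ciphertext and tag are illustrative choices, not the exact encoding used in our implementation.

```python
# Sketch of the DEM: AES-CBC + HMAC-SHA256 (encrypt-then-MAC) under a KEM-derived session key.
import os, hmac, hashlib
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives import padding


def dem_encrypt(key: bytes, m: bytes) -> bytes:
    enc_key, mac_key = key[:16], key[16:]          # 32-byte session key from the KEM
    iv = os.urandom(16)
    padder = padding.PKCS7(128).padder()
    padded = padder.update(m) + padder.finalize()
    encryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).encryptor()
    body = iv + encryptor.update(padded) + encryptor.finalize()
    tag = hmac.new(mac_key, body, hashlib.sha256).digest()
    return body + tag


def dem_decrypt(key: bytes, c: bytes) -> bytes:
    enc_key, mac_key = key[:16], key[16:]
    body, tag = c[:-32], c[-32:]
    if not hmac.compare_digest(tag, hmac.new(mac_key, body, hashlib.sha256).digest()):
        raise ValueError("MAC check failed")       # reject before touching the plaintext
    iv, ct = body[:16], body[16:]
    decryptor = Cipher(algorithms.AES(enc_key), modes.CBC(iv)).decryptor()
    padded = decryptor.update(ct) + decryptor.finalize()
    unpadder = padding.PKCS7(128).unpadder()
    return unpadder.update(padded) + unpadder.finalize()


session_key = os.urandom(32)                       # in the real protocol: output of the KEM
assert dem_decrypt(session_key, dem_encrypt(session_key, b"protocol message")) == b"protocol message"
```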

### **6.2.6 Pseudo-Random Functions**

A pseudo-random function (PRF)—more precisely a family of PRFs indexed by the seed λ—is a function F : L × X → Y, (λ, x) ↦ y that for a randomly chosen, but constant seed λ is computationally indistinguishable from a randomly chosen function R : X → Y. In other words, any PPT adversary given oracle access to either F(λ, ⋅) or R(⋅) cannot distinguish between them with non-negligible probability. Formally, a PRF is defined as follows.

**Definition 6.17 ((Group-Based) Pseudo-Random Function)** *Let gp* ≔ (G_1, G_2, G_T, e, p, g_1, g_2) ← Setup(1^n) *be as usual. The key space* L*, the domain* X *and the co-domain* Y *may all depend on gp. A (group-based)* pseudo-random function *(PRF)* PRF ≔ (Gen, Eval) *consists of two PPT algorithms:*


*We say that* PRF *is secure if for all PPT adversaries* A *it holds that the advantage* Adv^{prf}_{A}(1^n) *defined by*

$$\left| \Pr\left[1 \leftarrow \mathcal{A}^{\mathsf{PRF}(\lambda, \cdot)}(gp) \,\middle|\, \lambda \leftarrow \mathsf{Gen}(gp)\right] - \Pr\left[1 \leftarrow \mathcal{A}^{R(\cdot)}(gp) \,\middle|\, R \xleftarrow{\mathrm{R}} \{R : \mathcal{X} \rightarrow \mathcal{Y}\}\right] \right| \tag{6.30}$$

*is negligible in n.*

#### **Our Instantiation**

As we want to efficiently prove statements about PRF outputs, we use an efficient algebraic construction, namely the Dodis-Yampolskiy PRF [DY04]. This function is defined by PRF : ℤ_p^2 → G_1, (λ, x) ↦ g_1^{1/(λ+x)}, where λ ←R ℤ_p is the random PRF seed. It is secure for inputs x ∈ {0, …, q_PRF} ⊂ ℤ_p under the q_PRF-DDHI assumption. This is a family of increasingly stronger assumptions which is assumed to hold for asymmetric bilinear groups.
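The following sketch evaluates the Dodis-Yampolskiy PRF in a tiny schoolbook subgroup. The parameters are illustrative assumptions only; real instantiations live in a bilinear source group of roughly 2^254 elements.

```python
# Toy evaluation of the Dodis-Yampolskiy PRF: (lambda, x) -> g1^(1/(lambda + x)).
import random

p, q, g1 = 11, 23, 2          # toy subgroup of prime order p inside Z_q^*


def prf_gen():
    return random.randrange(1, p)               # seed lambda


def prf_eval(lam, x):
    exponent = pow((lam + x) % p, -1, p)        # 1/(lambda + x) mod p (needs lambda + x != 0 mod p)
    return pow(g1, exponent, q)


lam = 3   # prf_gen() in a real run; fixed here so the toy output is reproducible
# Consecutive fraud-detection IDs phi = PRF(lambda, x) look unrelated without lambda:
print([prf_eval(lam, x) for x in range(5)])
```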

#### **6.2.7 Range Proofs**

A range proof is not a real building block on its own, but rather a clever combination of a zero-knowledge scheme with a signature scheme. Nonetheless, in the rest of the thesis we treat range proofs as if they were building blocks and therefore present their construction in this chapter. A range proof asserts in zero-knowledge that some secret ℤ_p-element w is contained within the range {m_l, …, m_u}. Clearly, such a statement only makes sense if one pins down a fixed representation of ℤ_p, reinterprets ℤ_p-elements as ordinary ℤ-elements and then resorts to the normal ≤-relation over the integers. For the ease of presentation, we fix the representation ℤ_p ≜ {0, …, p − 1} ⊂ ℤ (see also Definitions 2.2 and 2.3). We need range proofs for two purposes:


We realize range proofs using Groth-Sahai by applying the signature-based technique of Camenisch, Chaabouni, and shelat [CCs08].

#### **The Trivial Approach**

Firstly, we recap the trivial approach to prove that a secret w is within the range {m_l, …, m_u}. The verifier creates a signature for every element in {m_l, …, m_u} under his secret key and publishes these signatures. Then, the prover proves in zero-knowledge that he knows a signature for his secret value. Obviously, this approach only works if the range {m_l, …, m_u} is small in order to keep the number of signatures limited. If the size of the range is proportional to the size of the underlying group ℤ_p, this method requires exponentially many signatures as log p ∈ Θ(n) holds.

#### **An Approach for Aligned Intervals**

Camenisch, Chaabouni, and shelat [CCs08] exploit a q-ary representation of the secret w with at most η_max positions to overcome this problem. For a fixed base

$$2 \le q \le \mathfrak{p} - 1 \tag{6.31}$$

the maximal admissible number of positions η_max is

$$\eta\_{\text{max}} := \lfloor \log\_q \mathfrak{p} \rfloor. \tag{6.32}$$

and the largest integer that can be represented equals

$$N\_{\text{max}} := q^{\eta\_{\text{max}}} - 1.\tag{6.33}$$

The base q ∈ ℕ and the maximum number of positions η_max ∈ ℕ are public design parameters and put into the CRS. Also, the verifier creates a signature for each of the possible digits {0, …, q − 1} in advance and publishes these signatures. For the actual range proof, the verifier and prover first agree on the number of positions η ∈ {1, …, η_max} they want to use. Then the prover secretly computes a representation w = ∑_{j=0}^{η−1} w_j q^j with w_j ∈ {0, …, q − 1}. The prover shows in zero-knowledge that this equality holds and that he knows a signature for each w_j, i.e. that each w_j is indeed a valid digit in {0, …, q − 1}. The actual value of each digit remains secret and the verifier only learns that w can be represented with η ≤ η_max positions. Hence, the approach is only applicable to intervals whose upper limit is aligned to powers of q.

#### **The General Case**

In order to prove membership in an arbitrary interval w ∈ {m_l, …, m_u} whose limits are not aligned to q-powers, the prover shifts the secret by a pertinent offset and then conducts two basic range proofs for two intervals that have properly aligned boundaries and whose intersection (after reverting the shift) equals the original interval.

Let η ∈ {1, …, η_max} be defined as

$$\eta := \lfloor \log\_q(m\_\mathbf{u} - m\_\mathbf{l}) \rfloor + 1 \quad \Leftrightarrow \quad q^{\eta - 1} \le m\_\mathbf{u} - m\_\mathbf{l} < q^\eta \tag{6.34}$$

and define an offset value o as

$$o := q^{\eta} - 1,\tag{6.35}$$

i.e. o + 1 is the smallest q-power larger than the length m_u − m_l of the interval. Please note, that q, η_max and the boundaries of the interval m_l, m_u are known by both the verifier and the prover. Hence, the number of needed positions η and the offset value o are public, too. It follows

$$\begin{aligned} & \mathrm{w} \in \{m_\mathrm{l}, \ldots, m_\mathrm{u}\} \\ & \Leftrightarrow\ \mathrm{w} - m_\mathrm{l} \in \{0, \ldots, m_\mathrm{u} - m_\mathrm{l}\} \\ & \Leftrightarrow\ \mathrm{w} - m_\mathrm{l} \in \{0, \ldots, o\} \cap \{m_\mathrm{u} - m_\mathrm{l} - o, \ldots, m_\mathrm{u} - m_\mathrm{l}\} \\ & \Leftrightarrow \begin{cases} \mathrm{w} - m_\mathrm{l} \in \{0, \ldots, o\} \ \wedge \\ o + \mathrm{w} - m_\mathrm{u} \in \{0, \ldots, o\} \end{cases} \end{aligned} \tag{6.36}$$

$$\Leftrightarrow \begin{cases} \exists\; \mathrm{w}'_0, \ldots, \mathrm{w}'_{\eta-1} \in \{0, \ldots, q-1\} \; : \; \mathrm{w} - m_\mathrm{l} = \sum_{j=0}^{\eta-1} \mathrm{w}'_j\, q^j \ \wedge \\ \exists\; \mathrm{w}''_0, \ldots, \mathrm{w}''_{\eta-1} \in \{0, \ldots, q-1\} \; : \; o + \mathrm{w} - m_\mathrm{u} = \sum_{j=0}^{\eta-1} \mathrm{w}''_j\, q^j \end{cases} \tag{6.37}$$

Unfortunately, the final two lines of equation (6.37) cannot be directly proven in zero-knowledge. For our particular instantiations of the building blocks, commitments to ℤ_p-elements are unveiled to the F_gp-mapping of the committed value. This implies that equation (6.37) has to be projected by F_gp as well. We denote the first η_max q-powers of g_1 by

$$Q\_j := \mathbf{g}\_1^{\{q^j\}} \qquad \text{ for } j = 0, \ldots, \eta\_{\max} - 1. \tag{6.38}$$

These constants are an F_gp-mapping of all relevant magnitudes of the positional digit system. For an F_gp-mapped secret W ∈ G_1 the prover shows

$$\exists \; \mathbf{w}'\_0, \dots, \mathbf{w}'\_{\eta-1} \in \mathbb{Z}\_p \; : \; W^{-1} \prod\_{j=0}^{\eta-1} Q\_j^{\mathbf{w}'\_j} = \mathbf{g}\_1^{-m\_1} \tag{6.39}$$

$$\exists\; \mathrm{w}''_0, \ldots, \mathrm{w}''_{\eta-1} \in \mathbb{Z}_\mathfrak{p} \; : \; W^{-1} \prod_{j=0}^{\eta-1} Q_j^{\mathrm{w}''_j} = g_1^{o - m_\mathrm{u}} \tag{6.40}$$

These are MSEs (cp. Section 6.2.1) and therefore perfectly fit into the Groth-Sahai proof system. Please remember, that besides Eqs. (6.39) and (6.40) the prover also has to show that w′_0, …, w′_{η−1}, w″_0, …, w″_{η−1} are valid digits in the range {0, …, q − 1}. Hence, the ZK-proof is additionally increased by 2η verifications of signatures that have been published by the verifier.
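To illustrate the arithmetic behind Eqs. (6.36) to (6.40), the following sketch computes η, the offset o and both digit decompositions for toy values and then checks the two F_gp-mapped equations in a small schoolbook subgroup. All concrete numbers are illustrative assumptions, not the real parameters (q = 16, group order near 2^254).

```python
# Toy check of the interval-shifting trick (Eqs. (6.36)-(6.40)).
import math

P, Q_mod, g1 = 11, 23, 2              # toy subgroup of order P in Z_Q^*, generator g1
q = 4                                  # digit base
m_l, m_u, w = 3, 9, 7                  # claimed: w lies in {m_l, ..., m_u}

eta = math.floor(math.log(m_u - m_l, q)) + 1
o = q ** eta - 1                       # offset; o + 1 is the smallest q-power > m_u - m_l

def digits(value, base, length):
    return [(value // base ** j) % base for j in range(length)]

w1 = digits(w - m_l, q, eta)           # digits of w - m_l
w2 = digits(o + w - m_u, q, eta)       # digits of o + w - m_u
assert sum(d * q ** j for j, d in enumerate(w1)) == w - m_l
assert sum(d * q ** j for j, d in enumerate(w2)) == o + w - m_u

# F_gp-mapped versions (Eqs. (6.39) and (6.40)): W = g1^w, Q_j = g1^(q^j).
W = pow(g1, w, Q_mod)
Qs = [pow(g1, q ** j, Q_mod) for j in range(eta)]
lhs1 = pow(W, -1, Q_mod) * math.prod(pow(Qs[j], w1[j], Q_mod) for j in range(eta)) % Q_mod
lhs2 = pow(W, -1, Q_mod) * math.prod(pow(Qs[j], w2[j], Q_mod) for j in range(eta)) % Q_mod
assert lhs1 == pow(g1, -m_l, Q_mod) and lhs2 == pow(g1, o - m_u, Q_mod)
```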

#### **Final Remarks with Respect to the Implementation**

Firstly, the efficiency of range proofs heavily depends on representing the element with individual digits and then proving statements about the digits in zero-knowledge. The design parameters q and η_max are a trade-off between the number of signatures and the size of the NIZK statement. Please note, that the signatures can be pre-computed and re-used for all NIZKs. Hence, a larger q and a smaller η_max is usually beneficial.

Secondly, due to rounding errors in η_max ≔ ⌊log_q p⌋ there is a "margin" of ℤ_p-elements {N_max + 1, …, p − 1} with N_max ≔ q^{η_max} − 1 that cannot be represented by the positional number system. These ℤ_p-elements encode "illegal" integers.³ Please note, that setting η_max (and N_max,

³ In practical terms this means that only a subset of ℤ_p can be used and "illegal" integers have to be avoided by the application. We claim that this does not pose a problem in practice. For a usual security level of 80 bit, the

resp.) to a larger value would foil the uniqueness of the representation w = ∑_{j=0}^{η_max−1} w_j q^j due to overflow issues. This would thwart the soundness of the range proof.

Thirdly, whenever a constant q^j appears in a formula, the party can compute Q_j = g_1^{q^j} by itself. In practice, it might be beneficial to pre-compute these constants and include them into the CRS such that they can be looked up quickly when needed.

group order p is a prime in the magnitude of 2^254. If the base q = 16 is used (as we do in our implementation), this yields η_max ≔ ⌊log_16 2^254⌋ = 63 and N_max ≔ 16^63 − 1 = 2^252 − 1. In other words, "only" integers between 0 and 2^252 can be represented, while integers between 2^252 and 2^254 are "illegal". For any naturally appearing balances/prices the restricted space of representable elements is far more than sufficient. Although cleartext balances/prices are restricted to a smaller domain, this does not weaken security as randomness and therefore ciphertexts/commitments still vary over the whole group.

## **7 System Instantiation**

In this chapter we describe and define a real protocol P5C that realizes our anonymous point collection scheme Fapc. We say

**Definition 7.1 (Provably-Secure yet Practical Privacy-Preserving Point Collection Scheme)** *A protocol* P5C *is called a* Provably-Secure yet Practical Privacy-Preserving Point Collection *scheme (P5C), if it UC-realizes* Fapc*.*

The proof that P5C is a UC-realization of Fapc is given in Chapter 8.

The style of the presentation follows the same structure as the presentation of the ideal model Fapc in Chapter 4. First, we describe what information is stored locally by each party in Section 7.1 and then present a realization for each of the tasks in Sections 7.2 to 7.4. Again, an overview of all used variables can be found in the appendix for quick reference.

Although P5C is a single, monolithic protocol, the individual tasks are presented as if they were individual protocols. For typographic reasons we split their presentation into a *wrapper protocol* and a *core protocol*. Except for a few cases, there is a one-to-one correspondence between wrapper and core protocols. The wrapper protocols have the same input/output interfaces as their ideal counterparts and describe steps that are executed by each party locally before and after the respective core protocol. The wrapper protocols pre-process the inputs, parse the previously stored state from local memory (which also includes loading individual keys), post-process the output after the core protocol has returned, persist the new state, and interact with other UC functionalities. The core protocols describe the actual interaction between parties and what messages are exchanged. This dichotomy between wrapper and core protocols is lifted in the following cases:


(3) The tasks DetectDS, VerifyGuilt and ProveParticipation (cf. Figs. 7.23, 7.24 and 7.29) are not split, because they are so short that doing so would run contrary to a concise presentation.

The realization P5C lives in the (Fmsg,Fbb,FCRS)-hybrid model. Fmsg is used for messaging between parties, Fbb is used to publish public keys for the parties, and FCRS is a trustworthy source for a common reference string. We refer the reader to Sections 3.4 and 3.5 for more details. In the following, the wrapper protocol for each task typically interacts with these setup functionalities and passes/accepts required information to/from the core protocol. The core protocols have no knowledge about the setup functionalities, but they implicitly use the messaging capabilities of Fmsg which has appropriately been initialized by surrounding wrapper protocols in advance.
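The division of labor between wrapper and core protocols can be pictured roughly as follows. Every name in the sketch (InMemoryStorage, DummyChannel, core_task, ...) is an illustrative assumption and does not reproduce any concrete task of P5C; it only mirrors the load-state / run-core / persist-state pattern described above.

```python
# Purely schematic sketch of the wrapper/core split; all names are illustrative assumptions.

class InMemoryStorage:                      # stands in for the party's local memory
    def __init__(self): self.db = {}
    def load(self, party): return self.db.get(party, {"counter": 0})
    def save(self, party, state): self.db[party] = state


class DummyChannel:                         # stands in for a pre-initialized F_msg channel
    def send(self, msg): self.last = msg
    def receive(self): return {"ack": self.last}


def core_task(task_input, state, channel):
    # The core protocol describes the actual message exchange; it never touches the
    # setup functionalities and only uses the channel handed over by the wrapper.
    channel.send({"request": task_input})
    reply = channel.receive()
    new_state = dict(state, counter=state["counter"] + 1)
    return reply, new_state


def wrapper_task(party, task_input, storage, channel):
    state = storage.load(party)             # parse previously stored state, incl. own keys
    result, new_state = core_task(task_input, state, channel)
    storage.save(party, new_state)          # persist the updated local state
    return result                           # same I/O interface as the ideal counterpart


storage = InMemoryStorage()
print(wrapper_task("user", "deposit 5 points", storage, DummyChannel()))
```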

## **7.1 The Local State of the Parties**

While in the ideal model all information is kept in a single, pervasive, trustworthy database *TRDB*, no such database exists in the real model. Instead, the state of the system is distributed across all parties. Figure 7.1 depicts what is stored by which party. After a description of each party's local state in Sections 7.1.1 to 7.1.3, the instantiation of the tags (cf. Section 4.1.2) is detailed in Section 7.1.4. Although these tags are not a direct part of a party's local state, they are passed between parties for synchronization and thus contribute to the local state.

#### **7.1.1 Local State of a User**

We start with a description of the user's state because the *wallet* is the central concept of our P5C scheme. A wallet is created during IssueWallet, locally stored by the user, and subsequently updated. If the inner components of a wallet are understood, the rest follows naturally. A wallet is of the form

$$\tau := (s, \varphi, x^{\mathrm{next}}, \lambda, a_{\mathcal{U}}, c_{\mathrm{upd}}, d_{\mathrm{upd}}, \sigma_{\mathrm{upd}}, \mathit{cert}_{\mathcal{P}}, c_{\mathrm{fix}}, d_{\mathrm{fix}}, \sigma_{\mathrm{fix}}, b, u_1^{\mathrm{next}}). \tag{7.1}$$

Some of the components are fixed after issuing, some change after every transaction. The most essential elements are two signed commitments c_fix, c_upd with c_fix binding the fixed part of a wallet and c_upd binding the updatable part.

The fixed part consists of the *wallet ID* λ, the *user attributes* a_U, the *fixed commitment* c_fix, its corresponding opening d_fix and a signature σ_fix which is created by the operator when the wallet

### **UC-Protocol** P5C

#### *I. Local State*

	- A public and private key (*pk*<sup>O</sup> ,*sk*O).
	- A self-signed certificate *cert*O.
	- A partial, set-valued and pairwise disjoint mapping *bl* assigning a set of blacklisted fraud-detection IDs to a blacklisting tag:

$$f_{\mathrm{bl}} : \Omega_{\mathrm{bl}} \to \wp(\Phi),\ \omega_{\mathrm{bl}} \mapsto bl_{\Phi}$$

	- A public and private key (*pk*<sup>P</sup> ,*sk*P).
	- A certificate *cert*<sup>P</sup> signed by the operator.
	- A public and private key (*pk*<sup>U</sup> ,*sk*U).
	- A set {τ} of the most recent states of all personal wallets.
	- A mapping f_pp assigning the hidden part ψ_pp of a prove-participation tag to the prove-participation tag ω_pp:

$$f\_{\rm pp} \,:\,\mathcal{Q}\_{\rm pp} \to \Psi\_{\rm pp}, \omega\_{\rm pp} \mapsto \psi\_{\rm pp}.$$

(4) The dispute resolver internally records a public and private key (*pkDR*,*skDR*).

*II. Behavior*


Figure 7.1: The Protocol P5C – Local State of Parties and Overview of Tasks

is issued. The fixed commitment c_fix = Commit(λ, sk_U)¹ pins users down to the wallet ID λ and their secret key sk_U. The signature σ_fix ← SIG.Sign(sk_O^fix, (c_fix, a_U)) is initially created by the operator, ties together c_fix and a_U and also gives testimony that the wallet is valid.

The updatable part consists of the *serial number* s, the *fraud-detection ID* φ for the current transaction, the transaction counter x^next for the next interaction, the *updatable commitment* c_upd, its corresponding opening d_upd, a signature σ_upd which is created by a PoS, a *PoS certificate* cert_P, the *balance* b, and the *double-spending mask* u_1^next for the next transaction. The updatable commitment c_upd = Commit(λ, b, u_1^next, x^next) binds together the wallet ID λ, the balance b, some user-chosen mask u_1^next to generate consistent double-spending tags in the next transaction and the future transaction counter x^next. The wallet ID is contained in the updatable commitment in order to link it with the fixed commitment. The signature σ_upd on (c_upd, s) ties c_upd to a serial number and is re-created by a PoS in every transaction. It is valid under the PoS' public key which is deposited in cert_P (cf. Section 7.1.2).

Note that the fraud-detection ID φ itself is not contained in the updatable commitment as it is determined by (λ, x) and thus is pinned down indirectly. The wallet ID λ serves as a PRF key and the fraud-detection ID of the current transaction is calculated as φ := PRF(λ, x^next − 1). This choice of the fraud-detection ID has the advantage that the different states of a wallet are untraceable as long as λ remains secret, but become traceable if λ is unveiled.
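To make the role of the wallet ID as a PRF seed concrete, the following minimal Python sketch derives a chain of fraud-detection IDs from a secret seed. It uses HMAC-SHA256 as a stand-in PRF purely for illustration; the actual construction requires the algebraic Dodis-Yampolskiy PRF so that the derivation can be proven in zero knowledge.

```python
# Illustrative sketch only: HMAC-SHA256 stands in for the algebraic PRF.
import hmac, hashlib, secrets

def prf(seed: bytes, counter: int) -> bytes:
    """Stand-in PRF: fraud-detection ID for transaction index `counter`."""
    return hmac.new(seed, counter.to_bytes(8, "big"), hashlib.sha256).digest()

wallet_id = secrets.token_bytes(32)   # secret wallet ID lambda, serves as the PRF seed
phi_0 = prf(wallet_id, 0)             # fraud-detection ID of the first transaction
phi_1 = prf(wallet_id, 1)             # next transaction; unlinkable without wallet_id
print(phi_0.hex()[:16], phi_1.hex()[:16])
```

As long as the seed stays secret the outputs look like independent random values; once it is known, the whole sequence can be recomputed and all states of the wallet become linkable.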

The remaining information which is stored by a user is rather simple (cp. Fig. 7.1). A user stores a public-secret key pair (pk_U, sk_U) for the purpose of identification. Additionally, a user locally manages a lookup table f_pp which associates a hidden counterpart ψ_pp to each prove-participation tag ω_pp that has been created in the scope of Disburse (cp. Section 7.3.3). For details on this not yet introduced hidden complement ψ_pp see Section 7.1.4. Users have to look up the hidden counterpart ψ_pp when they are confronted with one of their previous prove-participation tags ω_pp in the scope of ProveParticipation (cp. Section 7.4.4) again.

#### **7.1.2 Local State of a Point-of-Sale**

The local state of a PoS is a subset of what the operator stores and therefore described first. A PoS stores a public-private key pair (*pk*<sup>P</sup> ,*sk*P) and a certificate *cert*P. The key pair is of the form

$$pk_{\mathcal{P}} := (pk_{\mathcal{P}}^{\text{upd}}, pk_{\mathcal{P}}^{\text{rc}}, pk_{\mathcal{P}}^{\text{pp}}) \qquad\qquad sk_{\mathcal{P}} := (sk_{\mathcal{P}}^{\text{upd}}, sk_{\mathcal{P}}^{\text{rc}}, sk_{\mathcal{P}}^{\text{pp}}) \tag{7.2}$$

¹ By abuse of notation, we sometimes ignore the opening or decommitment value d_fix which is also an output of Commit(⋅).

and consists of three key pairs of an EUF-CMA secure signature scheme: (*pk*upd P ,*sk*upd P ) to sign the updatable part of a wallet, (*pk*rc P ,*sk*rc <sup>P</sup> ) to sign recalculation tags and (*pk*pp P ,*sk*pp P ) to sign prove-participation tags. The certificate is of the form

$$\mathit{cert}_{\mathcal{P}} := (pk_{\mathcal{P}}, a_{\mathcal{P}}, \sigma_{\mathcal{P}}^{\text{cert}}) \tag{7.3}$$

and consists of a signature σ_P^cert which is issued by the operator on the PoS' combined public key pk_P and its attributes a_P. A PoS obtains its certificate in the scope of CertifyPOS (cp. Section 7.2.3).

#### **7.1.3 Local State of the Operator**

The local state of the operator is a superset of what a PoS stores, because the operator can also act as a PoS. Likewise, the operator stores a public-private key pair (*pk*<sup>P</sup> ,*sk*P) and a self-signed certificate *cert*O. The key pair is of the form

$$\text{p}k\_O \coloneqq (\text{pk}\_O^{\text{fix}}, \text{pk}\_O^{\text{cert}}, \text{pk}\_O^{\text{upd}}, \text{pk}\_O^{\text{rc,sig}}, \text{pk}\_O^{\text{rc,enc}}) \quad \text{sk}\_O \coloneqq (\text{sk}\_O^{\text{fix}}, \text{sk}\_O^{\text{cert}}, \text{sk}\_O^{\text{upd}}, \text{sk}\_O^{\text{rc,sig}}, \text{sk}\_O^{\text{rc,enc}}) \tag{7.4}$$

and the signature key pairs (*pk*upd O ,*sk*upd O ) and (*pk*rc,sig O ,*sk*rc,sig O ) serve the same purpose as in Section 7.1.2. On top, the signature key pair (*pk*fix O ,*sk*fix <sup>O</sup> ) is used to sign the fixed part of a wallet, the signature key pair (*pk*cert O ,*sk*cert <sup>O</sup> ) is used to issue certificates for PoSes and the encryption key pair (*pk*rc,enc O ,*sk*rc,enc O ) is used to collect encrypted recalculation tags from the PoSes.

The map f_bl manages pairwise disjoint sets of fraud-detection IDs of blacklisted wallets. After a successful execution of IssueWallet the secret wallet ID λ is fixed. This also implies that the set Φ_λ of fraud-detection IDs which are used by this particular wallet is pre-determined, but it is of course unknown to the operator. Remember that the wallet ID serves as PRF seed (cp. Section 7.1.1). At the end of IssueWallet (cp. Section 4.3.1) the operator obtains a blacklisting tag ω_bl which allows the operator to recover the set Φ_λ. The map f_bl can be in one out of three possible states per ω_bl:

(1) f_bl(ω_bl) is undefined: the tag ω_bl has not (yet) been issued by the operator.

(2) f_bl(ω_bl) = ∅: the tag has been legitimately issued at the end of IssueWallet, but the corresponding wallet is not blacklisted.

(3) f_bl(ω_bl) = Φ_λ ≠ ∅: the corresponding wallet has been blacklisted and the recovered fraud-detection IDs are stored.


For an arbitrary but fixed ω_bl the map f_bl transits from state (1) to (2) at the end of IssueWallet (cp. Section 7.3.1) and from (2) to (3) at the end of BlacklistWallet (cp. Section 7.4.2). Note that f_bl is pairwise disjoint only with overwhelming probability: each of the sets Φ_λ = {PRF(λ, 0), …, PRF(λ, x_bound)} has finite size x_bound + 1 and there are only polynomially many of them with uniformly drawn λ, while the image of the PRF is exponentially large.
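The following Python sketch mirrors these three states with a plain dictionary; the helpers `issue` and `blacklist` are hypothetical names standing for the state changes at the end of IssueWallet and BlacklistWallet, respectively.

```python
# Toy model of the operator's map f_bl and its three states per tag.
f_bl: dict[str, set[str]] = {}

def issue(omega_bl: str) -> None:
    f_bl[omega_bl] = set()                       # state (2): issued, not blacklisted

def blacklist(omega_bl: str, phi_lambda: set[str]) -> None:
    f_bl[omega_bl] = phi_lambda                  # state (3): blacklisted

omega = "omega_bl of some wallet"
assert omega not in f_bl                         # state (1): tag unknown to the operator
issue(omega)
assert f_bl[omega] == set()
blacklist(omega, {"phi_0", "phi_1", "phi_2"})
print(sorted(f_bl[omega]))
```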

#### **7.1.4 Instantiation of Tags**

As detailed in Section 4.1.2, the main tasks IssueWallet, Disburse and Deposit generate various sorts of tags, namely blacklisting tags ω_bl, double-spending tags ω_ds, recalculation tags ω_rc and prove-participation tags ω_pp. These tags are used to periodically synchronize the local state of the parties and each sort of tag supports one of the utility tasks (cp. Section 4.1.2).

As the tags are not specific to a single task but link different tasks, this section gives an integrated explanation. As a common characteristic, three of these tags come in pairs with a hidden counterpart ψ_bl, ψ_rc and ψ_pp, resp.², which are not described in Section 4.1.2 as their implementation is specific to the realization P5C. The tags are output to the environment, passed around and are thus part of the public interface, while their hidden counterparts are kept secret from the environment.

#### **Blacklisting Tags**

A blacklisting tag ω_bl is output to the operator at the end of IssueWallet and allows, with the consent of the dispute resolver, to recover the sequence of all fraud-detection IDs Φ_λ = (PRF(λ, x))_{x ∈ {0,…,x_bound}} which are used by the wallet with the (secret) wallet ID λ. Further remember that the wallet ID λ does not only uniquely identify the wallet, but also serves as the seed for the Dodis-Yampolskiy PRF (cp. Section 6.2.6 and [DY04]) and determines the fraud-detection IDs by φ_{λ,x} := PRF(λ, x).

At first sight, the blacklisting feature could be implemented as a simple form of key-escrow mechanism. Ideally, the hidden blacklisting tag ψ_bl := λ would simply be set equal to the wallet ID and the user would encrypt ψ_bl under the public key pk_DR of the dispute resolver as ω_bl ← Enc(pk_DR, ψ_bl), which is then sent to the operator for later use. However, two issues need to be considered:

(1) The wallet ID λ is jointly chosen by the user and the operator by a Blum coin toss and thus consists of two shares λ′, λ″ (cp. IssueWallet in Section 7.3.1). If only the user chose it, an adversary could tamper with recalculations and blacklisting, as well as with double-spending detection (e.g., by re-using the same wallet ID for different wallets).

² Conceptually, there is also a hidden part for ω_ds, namely the DS mask u_1 (see later), but there is no need to store it and hence no separate symbol has been introduced, although this causes a lack of symmetry.

(2) The wallet ID λ is part of the fixed commitment c_fix of the wallet (cp. Section 7.1.1).

Hence, the user has to prove to the operator that the encrypted value in ψ_bl and the committed value in c_fix are equal and consistent with the Blum coin toss. For practical reasons, we use Groth-Sahai NIZK proofs (cp. Section 6.2.1 and [GS08]) and structure-preserving, shrinking commitments (cp. Section 6.2.2 and [Abe+15]). In order not to quash practical efficiency by a generic Cook reduction, we would need an encryption scheme whose message space equals the key space of the PRF (i.e., ℤ_p) and which is compatible with the GS-NIZK proof system (i.e., is algebraic). Unfortunately, we are unaware of such an encryption scheme.³

Instead, we use a variant of a CCA-secure structure-preserving encryption scheme for vectors of G_1-elements which we adapted to our algebraic setting (cp. Section 6.2.4 and [Cam+11]). This makes it impossible to directly decrypt the original wallet ID λ ∈ ℤ_p, as recovering λ from g_1^λ would require solving the DLOG problem in G_1. Therefore, we apply the following workaround. Users split their share λ′ into small chunks λ′_0, …, λ′_{ℓ−1} ∈ {0, …, B − 1} such that λ′ = ∑_{i=0}^{ℓ−1} λ′_i ⋅ B^i for some base B. The base B is chosen such that it is feasible for the dispute resolver to recover λ′_i from g_1^{λ′_i} by brute force in a reasonable amount of time (e.g., B = 2^32). The user creates the hidden blacklisting tag as

$$\psi_{\text{bl}} \leftarrow \text{ENC1.Enc}\bigl(pk_{DR}, (\Lambda'_0, \dots, \Lambda'_{\ell-1}, \Lambda'', pk_{\mathcal{U}})\bigr) \qquad \text{with} \qquad \Lambda'_i := g_1^{\lambda'_i} \tag{7.5}$$

and the operator complements it to the blacklisting tag as

$$
\omega\_{\rm bl} := (\lambda'', \psi\_{\rm bl}) \tag{7.6}
$$

The CCA-secure ciphertext ψ_bl includes the user's key pk_U to rule out malleability attacks. Otherwise, a malicious operator could potentially trick the dispute resolver into recovering the trapdoor for a different (innocent) user.
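The chunking workaround can be illustrated independently of the concrete encryption scheme. The sketch below (toy parameters, not the actual ENC1/Groth-Sahai instantiation) splits a share into base-B digits, encodes each digit as a group element, and recovers the share by brute-forcing the small discrete logarithms, as the dispute resolver later does during BlacklistWallet.

```python
# Toy parameters for readability; the real scheme works in a pairing group.
p = 2**61 - 1                  # toy prime modulus (stand-in for the group order)
g = 3                          # toy generator
CHUNK_BITS = 16
B = 1 << CHUNK_BITS            # splitting base; small enough for exhaustive search
ELL = 4                        # number of chunks in this toy example

def split(share: int) -> list[int]:
    """Base-B digits of the user's share lambda'."""
    return [(share >> (CHUNK_BITS * i)) & (B - 1) for i in range(ELL)]

def brute_force_dlog(h: int) -> int:
    """Recover d < B from h = g^d mod p by exhaustive search (dispute resolver side)."""
    acc = 1
    for d in range(B):
        if acc == h:
            return d
        acc = acc * g % p
    raise ValueError("no digit found")

share = 0x1234_5678_9ABC       # user's secret share lambda' (toy value, < B**ELL)
encoded = [pow(g, d, p) for d in split(share)]   # the group elements placed inside psi_bl
recovered = sum(brute_force_dlog(h) * B**i for i, h in enumerate(encoded))
assert recovered == share
print(hex(recovered))
```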

#### **Double-Spending Tags**

Our double-spending detection mechanism utilizes a well-known technique from the (offline) e-cash literature. The secret user key sk_U is hidden in the slope of a line in the plane. At every transaction, the operator challenges the user for a point on this line. The operator picks the DS challenge u_2 and the user replies with t := u_2 ⋅ sk_U + u_1 mod p. The concrete line is masked by the DS mask u_1, which encodes the line's ordinate and is secretly chosen by the user.

³ Note that Paillier encryption works in a different algebraic setting and cannot easily be combined with Groth-Sahai proofs.

As long as each state of a wallet is only used once, each time a different DS mask u_1 and thus a different line is used and no information about sk_U is unveiled. In case of a double-spending, the same DS mask is used, the operator learns two points (u_2, t), (u′_2, t′) on the same line and can restore the user's secret key via sk_U := (t − t′)/(u_2 − u′_2) mod p. In order to force the user to use the same DS mask u_1 in case of a double-spending, the DS mask for the next transaction is fixed as u_1^next in the previous transaction and put into the updatable commitment c_upd of the wallet (cf. Section 7.1.1).

The double-spending tag has the form

$$
\omega\_{\rm ds} := (\varphi, t, u\_2) \tag{7.7}
$$

and also includes the fraud-detection ID φ to identify matching double-spending tags which have been created for the same wallet state. Moreover, the secret user key does not only allow to look up the user's PID for the public key pk_U := g_1^{sk_U}, but also serves as a proof of guilt π := sk_U due to the hardness of the DLOG in G_1.
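The extraction of sk_U from two matching double-spending tags is plain modular arithmetic. The following sketch works over a toy prime field instead of the real group order and omits the group-element encodings.

```python
# Toy demonstration of the double-spending extraction used by DetectDS.
import secrets
p = 2**61 - 1                      # toy prime (stand-in for the group order)
sk_U = secrets.randbelow(p)        # user's secret key
u1 = secrets.randbelow(p)          # DS mask fixed in the current wallet state

def respond(u2: int) -> int:
    """User's DS response for challenge u2: a point on the masked line."""
    return (sk_U * u2 + u1) % p

# Double-spending: the same wallet state answers two different challenges.
u2a = secrets.randbelow(p)
u2b = secrets.randbelow(p)
while u2b == u2a:                  # DetectDS outputs bottom if the challenges coincide
    u2b = secrets.randbelow(p)
ta, tb = respond(u2a), respond(u2b)

# DetectDS: the slope of the line through both points is sk_U.
recovered = (ta - tb) * pow(u2a - u2b, -1, p) % p
assert recovered == sk_U           # the recovered key doubles as the proof of guilt
```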

#### **Recalculation Tags**

The tasks Deposit and Disburse output recalculation tags that allow the operator to recalculate the true balance of a wallet, given that the operator has recovered the blacklist Φ_λ = (PRF(λ, x))_{x ∈ {0,…,x_bound}} of fraud-detection IDs of the wallet before. The recalculation tag ω_rc and its hidden complement ψ_rc are simply constructed as

$$\psi_{\text{rc}} := (s, \varphi, p, pk_{\mathcal{P}}^{\text{rc}}, \sigma_{\text{rc}}) \tag{7.8}$$

$$\omega_{\text{rc}} \leftarrow \text{ENC2.Enc}(pk_{\mathcal{O}}^{\text{rc,enc}}, \psi_{\text{rc}}) \tag{7.9}$$

The fraud-detection ID φ and the price p are needed for the obvious reason that the operator needs to match φ with the set Φ_λ and to recalculate the balance as the sum of all prices. The serial number s is included to enforce uniqueness of the tags for formal reasons, even if all other attributes are equal. This might happen if the same user commits double-spending at the same PoS and also obtains the same price. The signature σ_rc and the encryption realize an authenticated and confidential channel despite the fact that the framing protocol, i.e. the environment, is in charge of transporting recalculation tags from the PoSes to the operator. The signature on the triple (s, φ, p) under the secret key sk_P^rc of the PoS rules out that the environment can inject fake recalculation tags in the name of an honest PoS. The encryption is required because the price might infringe upon a user's privacy and is not leaked by the ideal model as long as the user, the involved PoS and the operator are honest.

#### **Prove-Participation Tags**

At the end of the task Deposit a prove-participation tag ω_pp is output to the user and the PoS which allows the user to prove to have participated in this transaction. The prove-participation tag ω_pp and its hidden complement ψ_pp are constructed as

$$\psi_{\text{pp}} := (pk_{\mathcal{P}}^{\text{pp}}, \sigma_{\text{pp}}, d_{pk_{\mathcal{U}}}) \tag{7.10}$$

$$\omega_{\text{pp}} := c_{pk_{\mathcal{U}}} \tag{7.11}$$

with (c_{pk_U}, d_{pk_U}) ← C2.Commit(crs^(2)_com, sk_U) being a commitment on the user's key and σ_pp ← SIG.Sign(sk_P^pp, c_{pk_U}) a signature on the commitment that is valid under the public key of the PoS which took part in the transaction. The principal idea is that the hiding property of the commitment, i.e. of the "public" prove-participation tag ω_pp, asserts anonymity. But when the suspected user is confronted with ω_pp again and summoned to prove its participation with a particular PoS, only the legitimate owner of ω_pp who has securely stored ψ_pp can unveil the correct identity during the task ProveParticipation.
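The commit-then-sign pattern behind the prove-participation tags can be sketched with stand-ins: a hash commitment replaces C2 and an HMAC under a PoS key replaces the signature scheme. All names and parameters below are illustrative only.

```python
# Illustrative stand-ins for the prove-participation mechanism.
import hashlib, hmac, secrets

def commit(msg: bytes) -> tuple[bytes, bytes]:
    d = secrets.token_bytes(32)                       # opening, kept secret by the user
    c = hashlib.sha256(d + msg).digest()              # hiding/binding commitment
    return c, d

pos_key = secrets.token_bytes(32)                     # stand-in for sk_P^pp
pk_user = b"pk_U of the depositing user"

# Deposit: the user sends the commitment, the PoS signs it.
c, d = commit(pk_user)                                # omega_pp := c, kept by the PoS
sigma = hmac.new(pos_key, c, hashlib.sha256).digest() # sigma_pp
psi_pp = (sigma, d)                                   # hidden part stored by the user

# ProveParticipation: the suspected user opens the commitment.
sigma, d = psi_pp
assert hmac.compare_digest(hmac.new(pos_key, c, hashlib.sha256).digest(), sigma)
assert hashlib.sha256(d + pk_user).digest() == c      # only the owner knows the opening d
```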

## **7.2 Setup Tasks**

As in the system definition (cp. Section 4.2) all parties need to register themselves with a public key before they can participate in the system, and PoSes need to be certified. In addition, the whole system needs to be set up once. The latter has no counterpart in the ideal model and is realized through the setup functionality FCRS.

#### **7.2.1 System Setup**

To set up the system once (see Fig. 7.2), the public parameters crs must be generated in a trustworthy way. The CRS crs consists of a description of the underlying algebraic framework gp, a splitting base B and the individual CRSes for the cryptographic building blocks. We assume that the CRS is implicitly available to all protocols and algorithms by means of FCRS.

#### **7.2.2 Registrations**

The tasks RegisterDR (cp. Figs. 7.3 and 7.4), RegisterOp (cp. Figs. 7.5 and 7.6), RegisterPOS (cp. Figs. 7.7 and 7.8) and RegisterUser (cp. Figs. 7.9 and 7.10) are realized in the obvious way. The respective core protocols run the key generation algorithms of the underlying building blocks and return a combined public-private key pair. After that the wrapper protocols register the public key at the bulletin board Fbb.

DR output:(registered)

Figure 7.3: The Protocol P5C (cont. from Fig. 7.1) – Task RegisterDR


Figure 7.4: The Core Protocol for Task RegisterDR (used by Fig. 7.3)

### **UC-Protocol** P5C **(cont.) – Task** RegisterOp

Operator input:(register, O)


Operator output:(registered)

Figure 7.5: The Protocol P5C (cont. from Fig. 7.1) – Task RegisterOp

```
RegisterOp(crs, a_O)
  parse (gp, B, crs^(1)_com, crs^(2)_com, crs^(3)_com, crs^(4)_com, crs_pok) := crs
  (pk_O^fix,    sk_O^fix)    ← SIG.Gen(gp)
  (pk_O^cert,   sk_O^cert)   ← SIG.Gen(gp)
  (pk_O^upd,    sk_O^upd)    ← SIG.Gen(gp)
  (pk_O^rc,sig, sk_O^rc,sig) ← SIG.Gen(gp)
  (pk_O^rc,enc, sk_O^rc,enc) ← ENC2.Gen(gp)
  (pk_O, sk_O) := ((pk_O^fix, pk_O^cert, pk_O^upd, pk_O^rc,sig, pk_O^rc,enc),
                   (sk_O^fix, sk_O^cert, sk_O^upd, sk_O^rc,sig, sk_O^rc,enc))
  σ_O^cert ← SIG.Sign(sk_O^cert, (pk_O^upd, a_O))
  cert_O := (pk_O^upd, a_O, σ_O^cert)
  return (pk_O, sk_O, cert_O)
```

Figure 7.6: The Core Protocol for Task RegisterOp (used by Fig. 7.5)

### **UC-Protocol** P5C **(cont.) – Task** RegisterPOS

PoS input:(register)


PoS output:(registered)

Figure 7.7: The Protocol P5C (cont. from Fig. 7.1) – Task RegisterPOS

```
RegisterPOS(crs)
  parse (gp, B, crs^(1)_com, crs^(2)_com, crs^(3)_com, crs^(4)_com, crs_pok) := crs
  (pk_P^upd, sk_P^upd) ← SIG.Gen(gp)
  (pk_P^rc,  sk_P^rc)  ← SIG.Gen(gp)
  (pk_P^pp,  sk_P^pp)  ← SIG.Gen(gp)
  (pk_P, sk_P) := ((pk_P^upd, pk_P^rc, pk_P^pp), (sk_P^upd, sk_P^rc, sk_P^pp))
  return (pk_P, sk_P)
```

Figure 7.8: The Core Protocol for Task RegisterPOS (used by Fig. 7.7)

User input:(register)


User output:(registered)

Figure 7.9: The Protocol P5C (cont. from Fig. 7.1) – Task RegisterUser


Figure 7.10: The Core Protocol for Task RegisterUser (used by Fig. 7.9)

The keys of the operator, a PoS and a user are described in Sections 7.1.1 to 7.1.3. The DR generates a key pair (pk_DR, sk_DR) for an IND-CCA secure encryption scheme. The key pk_DR is used to deposit the secret wallet ID and PRF key λ in encrypted form in the wallet-specific blacklisting tag ψ_bl, which allows to link this wallet's transactions in case of a dispute.

#### **7.2.3 Point-of-Sale Certification**

The task CertifyPOS (cp. Figs. 7.11 and 7.12) is executed between a PoS and the operator when a new PoS is deployed into the field. At the end of the task the PoS has obtained its certificate which is locally stored. It contains the PoS' public key pk_P, its attributes a_P (which are chosen by the operator), and a signature on both, generated by the operator using sk_O^cert.

Remember that it is advisable to encode some sort of limited validity period into a_P to mitigate the impact of stolen or otherwise compromised PoSes which may be placed unattended in the field (cp. Section 2.4). This implies that CertifyPOS has to be run repeatedly to refresh cert_P from time to time (cp. Section 4.2.2).

## **7.3 Main Tasks**

This section describes the realization of the main tasks IssueWallet, Deposit and Disburse. Although the tasks are presented in that order, the individual steps and messages of each task are not described in temporal order; instead, specific elements are explained in a semantic context across messages. We refer the reader to the figures for the temporal order of messages. Also, the principal structure of a wallet (cp. Section 7.1.1) and the tags (cp. Section 7.1.4) should be known.

#### **7.3.1 Wallet Issuing**

This task IssueWallet (cp. Figs. 7.13 to 7.15) is executed between a user and the operator to create a new wallet with a fresh wallet ID λ and balance 0. It fulfills four objectives:

(1) The parties jointly generate a random serial number s for the first transaction.

(2) The parties fix the wallet ID λ such that it is known to the user but remains hidden from the operator.

(3) The user escrows the wallet ID λ in the blacklisting tag ω_bl which is handed to the operator.

(4) The user obtains the signed fixed and updatable commitments which constitute the initial wallet τ.


For the first objective, both parties randomly choose shares of the serial number s′ ∈ G_1 and s″ ∈ G_1, resp., which together form the serial number s := s′ ⋅ s″. To this end, the parties engage in a standard Blum coin toss in messages 2–4.

Similarly, the parties run half of a Blum coin toss for the second objective in messages 1 and 2. As the user starts and the coin toss is prematurely stopped after the second message, the wallet ID λ := λ′ + λ″ ∈ ℤ_p is fixed and known by the user, but remains secret to the operator.
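Both coin tosses follow the usual commit-then-reveal pattern. The sketch below shows the idea for the serial number, with a hash commitment standing in for the commitment scheme C4; for the wallet ID the same pattern is simply cut short after the user has learned both shares.

```python
# Minimal Blum coin-toss sketch (hash commitment as a stand-in for C4).
import hashlib, secrets
p = 2**61 - 1                                    # toy modulus

def commit(value: int) -> tuple[bytes, bytes]:
    d = secrets.token_bytes(32)
    c = hashlib.sha256(d + value.to_bytes(8, "big")).digest()
    return c, d

# Operator commits to its share s'' first ...
s2 = secrets.randbelow(p)
c2, d2 = commit(s2)
# ... the user then reveals its share s' ...
s1 = secrets.randbelow(p)
# ... and finally the operator opens; the user verifies the opening.
assert hashlib.sha256(d2 + s2.to_bytes(8, "big")).digest() == c2
serial = (s1 * s2) % p                           # jointly random value, here via multiplication
print(serial)
```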

After the wallet ID has been pinned down, the user prepares the escrow of λ in the blacklisting tag for the third objective. The user deposits the split chunks {λ′_i}_{i ∈ {0,…,ℓ−1}} of the user's share λ′ in encrypted form in the hidden blacklisting tag ψ_bl and sends it to the operator in message 3; the operator augments it by the operator's share λ″ to form the complete blacklisting

### **UC-Protocol** P5C **(cont.) – Task** CertifyPOS

PoS input:(certify\_pos)

	- (a) Load the internally recorded (*pk*<sup>P</sup> ,*sk*P). ⊥
	- (b) Receive *pk*<sup>O</sup> from the bulletin-board Fbb for PID *pid*<sup>O</sup> . ⊥
	- (c) Call Fmsg with (establish-session, ident, *pid*<sup>O</sup> , certify\_pos).
	- (a) Load the internally recorded (*pk*<sup>O</sup> ,*sk*O). ⊥
	- (b) Receive *pk*<sup>P</sup> from the bulletin-board Fbb for PID *pid*<sup>P</sup> . ⊥

Operator output: (certifying\_pos, *pid*<sup>P</sup> ) Operator input:(certifying\_pos, P)


((*cert*P),(OK)) ← CertifyPOS ⟨P(*pk*<sup>O</sup> , *pk*<sup>P</sup> ), O(*pk*<sup>O</sup> ,*sk*O, *pk*<sup>P</sup> , P)⟩ .

	- (a) Parse <sup>P</sup> from *cert*P.
	- (b) Record *cert*<sup>P</sup> internally.
	- (c) Call Fmsg with (close,*ssid*).

(7) At the operator side: Receive (closed,*ssid*) from Fmsg.

PoS output:(certified\_pos, P)

Operator output:(certified\_pos)

⊥ If this does not exist, abort.

Figure 7.11: The Protocol P5C (cont. from Fig. 7.1) – Task CertifyPOS

```
P(pk_O, pk_P)                                  O(pk_O, sk_O, pk_P, a_P)
parse (pk_O^fix, pk_O^cert, pk_O^upd,          parse (pk_O^fix, pk_O^cert, pk_O^upd,
       pk_O^rc,sig, pk_O^rc,enc) := pk_O              pk_O^rc,sig, pk_O^rc,enc) := pk_O
                                               parse (sk_O^fix, sk_O^cert, sk_O^upd,
                                                      sk_O^rc,sig, sk_O^rc,enc) := sk_O
                                               σ_P^cert ← SIG.Sign(sk_O^cert, (pk_P, a_P))
                                               cert_P := (pk_P, a_P, σ_P^cert)

                        ←――――  cert_P  ――――

parse (pk'_P, a_P, σ_P^cert) := cert_P
if SIG.Vfy(pk_O^cert, σ_P^cert, (pk_P, a_P)) = 0
    return ⊥
return (cert_P)                                return (OK)
```

Figure 7.12: The Core Protocol for Task CertifyPOS (used by Fig. 7.11)

tag ω_bl and locally stores ω_bl ↦ ∅ in f_bl to mark the blacklisting tag as legitimately issued and not blacklisted. For a detailed explanation of ω_bl and ψ_bl see Section 7.1.4.

For the last objective, the user generates the fixed and updatable commitments c_fix, c_upd, resp., which are then signed by the operator (see messages 3–4). See Section 7.1.1 for a detailed description of the structure of a wallet. In order to show that these commitments are constructed correctly, the user uses P1 to compute a proof π for a statement stmnt from the language L^(1)_gp defined by

$$
L_{gp}^{(1)} := \left\{ \mathit{stmnt} \;\middle|\;
\begin{array}{l}
\exists\; \lambda, \lambda', \lambda'_0, \dots, \lambda'_{\ell-1}, r_1, r_2 \in \mathbb{Z}_p;\;
d_{\text{fix}}, d_{\text{upd}}, \dots \in G_1 \;\colon \\[2pt]
\psi_{\text{bl}} = \text{ENC1.Enc}\bigl(pk_{DR}, (g_1^{\lambda'_0}, \dots, g_1^{\lambda'_{\ell-1}}, \Lambda'', pk_{\mathcal{U}}); r_1\bigr) \\[2pt]
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, pk_{\mathcal{U}}), c_{\text{fix}}, d_{\text{fix}}\bigr) = 1, \qquad
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, \dots), c_{\text{upd}}, d_{\text{upd}}\bigr) = 1 \\[2pt]
\lambda = \lambda' + \lambda'', \qquad
\lambda' = \sum_{i=0}^{\ell-1} \lambda'_i \cdot B^i
\end{array}
\right\}
\tag{7.12}
$$

This proof system also asserts that the hidden blacklisting tag ψ_bl has been created correctly.

### **UC-Protocol** P5C **(cont.) – Task** IssueWallet

User input:(issue\_wallet)

#### (1) At the user side:

	- (a) Load the internally recorded (*pk*<sup>O</sup> ,*sk*O). ⊥
	- (b) Load the internally recorded *cert*O. ⊥
	- (c) Receive *pkDR* from the bulletin-board Fbb for PID *pidDR*. ⊥
	- (d) Receive *pk*<sup>U</sup> from the bulletin-board Fbb for PID *pid*<sup>U</sup> . ⊥

Operator output: (issuing\_wallet, *pid*<sup>U</sup> ) Operator input:(issuing\_wallet, U)


((τ), (s, ω_bl)) ← IssueWallet ⟨U(pk_DR, pk_U, sk_U), O(pk_DR, sk_O, pk_U, a_U, cert_O)⟩ .

(6) At the user side:


User output: (issued\_wallet, , U) Operator output:(issued\_wallet, , bl)

⊥ If this does not exist, abort.

Figure 7.13: The Protocol P5C (cont. from Fig. 7.1) – Task IssueWallet


Figure 7.14: The Core Protocol for Task IssueWallet (used by Fig. 7.13)

```
U (cont.)                                   O (cont.)

            ――  s', ψ_bl, c_fix, c_upd, π  ――→

                                            s := s' ⋅ s''
                                            stmnt := (pk_U, pk_DR, ψ_bl, c_fix, c_upd,
                                                      c'_wid, λ'', c''_ser)
                                            if P1.Vfy(crs_pok, stmnt, π) = 0
                                                return ⊥
                                            σ_fix ← SIG.Sign(sk_O^fix, (c_fix, a_U))
                                            σ_upd ← SIG.Sign(sk_O^upd, (c_upd, s))

            ←――  s'', d''_ser, σ_fix, σ_upd  ――

if C4.Open(crs^(4)_com, s'', c''_ser, d''_ser) = 0
    return ⊥
s := s' ⋅ s''
τ := (s, PRF(λ, 0), 1, λ, a_U, c_upd, d_upd, σ_upd,
      cert_O, c_fix, d_fix, σ_fix, 0, u_1^next)
                                            ω_bl := (λ'', ψ_bl)
return (τ)                                  return (s, ω_bl)
```
Figure 7.15: The Core Protocol for Task IssueWallet (cont., used by Fig. 7.13)
#### **7.3.2 Deposition**

The task Deposit (Figs. 7.16 to 7.20) is executed between an anonymous user and a PoS to deposit points on a wallet owned by the user. It serves the following objectives:


As in Section 7.3.1 for IssueWallet, the first objective is implemented by a standard Blum coin toss in messages 1–3.

To achieve the second objective, the task is interactive. After the user has sent the attributes a_U, a_P^prev which are required to determine the price in message 2, those are output to the PoS. The PoS restarts the second part of Deposit with the price p as input, which is then sent back to the user in message 3.

Figure 7.16: The Protocol P5C (cont. from Fig. 7.1) – Task Deposit, Part 1

For the third objective, the homomorphism of the commitment scheme is exploited. Also see Section 7.1.1 for a detailed description of the wallet. The user creates a re-randomized version c′_upd of the previous updatable commitment c^prev_upd. The commitment c′_upd contains the same values as c^prev_upd except for a fresh DS mask u_1^next (see next paragraph) and is sent to the PoS in message 2. Re-randomization enables unlinkability. The PoS applies the homomorphic update c″_upd to c′_upd to deposit p points on the balance and to increase the transaction counter by 1. The combination of the serial number s and the modified updatable commitment c_upd is signed by the PoS with σ_upd and both are sent back to the user in message 3.
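The balance update can be illustrated with any additively homomorphic commitment. The sketch below uses a schoolbook Pedersen commitment over a toy group (parameters chosen for readability, not security) and only shows the balance coordinate; the actual commitment additionally binds the wallet ID, the DS mask and the transaction counter.

```python
# Toy Pedersen commitment: multiplying commitments adds the committed values.
import secrets
P, q = 2039, 1019          # toy safe-prime group; q is the order of the QR subgroup
g, h = 4, 9                # two subgroup generators (mutual DLOG easy here; toy only)

def commit(m: int, r: int) -> int:
    return pow(g, m % q, P) * pow(h, r % q, P) % P

# User side: re-randomized commitment on the current balance b.
b, r1 = 10, secrets.randbelow(q)
c_prime = commit(b, r1)

# PoS side: homomorphic update adding `price` points.
price, r2 = 3, secrets.randbelow(q)
c_update = commit(price, r2)
c_new = c_prime * c_update % P          # commits to b + price with randomness r1 + r2

assert c_new == commit(b + price, r1 + r2)
print(c_new)
```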

To generate the double-spending tag ω_ds for the current transaction, the PoS challenges the user with u_2 in the first message. In the second message, the user responds with the DS

Figure 7.17: The Protocol P5C (cont. from Fig. 7.1) – Task Deposit, Part 2

response t := sk_U ⋅ u_2 + u_1 mod p and the current fraud-detection ID φ := PRF(λ, x) which has been calculated by the user in the beginning. This gives the PoS all information to construct the double-spending tag ω_ds := (φ, t, u_2). Note that the currently used DS mask u_1 stems from the previous wallet state τ^prev, namely as the mask u_1^next stored there. In preparation for the double-spending mechanism in the upcoming transaction, the user embeds a fresh DS mask u_1^next in the re-randomized version c′_upd of the current updatable commitment (see above).

The creation of the recalculation tag ω_rc is straightforward as the PoS has all necessary information at hand and can simply compile it. For details see Section 7.1.4.

For the prove-participation tag ω_pp and its hidden counterpart ψ_pp the user creates a commitment (c_{pk_U}, d_{pk_U}) ← C2.Commit(crs^(2)_com, sk_U) on the user's key and sends c_{pk_U} to the PoS in the second message. The PoS replies with a corresponding signature σ_pp in message 3. After that the user knows all components and can set the hidden prove-participation tag to ψ_pp := (pk_P^pp, σ_pp, d_{pk_U}) and the prove-participation tag to ω_pp := c_{pk_U}.

In order to show that everything has been computed honestly, the user sends a proof π as part of the second message. In particular, π shows that the user knows a signed wallet state


Figure 7.18: The Core Protocol for Task Deposit, Part 1 (used by Fig. 7.16)

```
            ――  s', φ, t, a_U, a_P^prev, c_pk_U, c'_upd, π  ――→

                                            stmnt := (pk_O^fix, pk_O^cert, φ, a_U, a_P^prev,
                                                      c_pk_U, c'_upd, t, u_2)
                                            if P2.Vfy(crs_pok, stmnt, π) = 0
                                                return ⊥
                                            if φ ∈ Φ_bl
                                                return blacklisted_wallet
                                            s := s' ⋅ s''
return (OK)                                 return (φ, a_U, a_P^prev)
```

Figure 7.19: The Core Protocol for Task Deposit, Part 1 (cont., used by Fig. 7.16)


Figure 7.20: The Core Protocol for Task Deposit, Part 2 (used by Fig. 7.17)

involving commitments c_fix and c^prev_upd such that c^prev_upd and c′_upd are commitments on the same messages except for the DS mask, that the (hidden) signature σ^prev_upd on c^prev_upd verifies under some (hidden) PoS key pk_P^prev certified by the operator, and that φ, t, and c_{pk_U} have been computed using the values contained in c_fix and c^prev_upd. Formally, the language L^(2)_gp of the statement stmnt for the proof π is defined by

$$
L_{gp}^{(2)} := \left\{
\bigl(pk_{\mathcal{O}}^{\text{fix}}, pk_{\mathcal{O}}^{\text{cert}}, \varphi, a_{\mathcal{U}}, a_{\mathcal{P}}^{\text{prev}}, c_{pk_{\mathcal{U}}}, c'_{\text{upd}}, t, u_2\bigr)
\;\middle|\;
\begin{array}{l}
\exists\, \lambda, x, sk_{\mathcal{U}}, u_1, u_1^{\text{next}} \in \mathbb{Z}_p;\;
\varphi^{\text{prev}}, s^{\text{prev}}, b^{\text{prev}}, pk_{\mathcal{U}},
pk_{\mathcal{P}}^{\text{prev}} = (pk_{\mathcal{P}}^{\text{upd,prev}}, pk_{\mathcal{P}}^{\text{rc,prev}}), \\
\quad c^{\text{prev}}_{\text{upd}}, c_{\text{fix}},\;
d_{pk_{\mathcal{U}}}, d^{\text{prev}}_{\text{upd}}, d'_{\text{upd}}, d_{\text{fix}},\;
\sigma^{\text{prev}}_{\text{upd}}, \sigma^{\text{cert,prev}}_{\mathcal{P}}, \sigma_{\text{fix}} \,\colon \\[3pt]
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, pk_{\mathcal{U}}), c_{\text{fix}}, d_{\text{fix}}\bigr) = 1, \quad
\text{C2.Open}\bigl(crs^{(2)}_{\text{com}}, pk_{\mathcal{U}}, c_{pk_{\mathcal{U}}}, d_{pk_{\mathcal{U}}}\bigr) = 1, \\
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, g_1^{b^{\text{prev}}}, g_1^{u_1}, g_1^{x}), c^{\text{prev}}_{\text{upd}}, d^{\text{prev}}_{\text{upd}}\bigr) = 1, \quad
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, g_1^{b^{\text{prev}}}, g_1^{u_1^{\text{next}}}, g_1^{x}), c'_{\text{upd}}, d'_{\text{upd}}\bigr) = 1, \\
\text{SIG.Vfy}\bigl(pk_{\mathcal{O}}^{\text{fix}}, \sigma_{\text{fix}}, (c_{\text{fix}}, a_{\mathcal{U}})\bigr) = 1, \quad
\text{SIG.Vfy}\bigl(pk_{\mathcal{P}}^{\text{upd,prev}}, \sigma^{\text{prev}}_{\text{upd}}, (c^{\text{prev}}_{\text{upd}}, s^{\text{prev}})\bigr) = 1, \quad
\text{SIG.Vfy}\bigl(pk_{\mathcal{O}}^{\text{cert}}, \sigma^{\text{cert,prev}}_{\mathcal{P}}, (pk_{\mathcal{P}}^{\text{prev}}, a_{\mathcal{P}}^{\text{prev}})\bigr) = 1, \\
\varphi^{\text{prev}} = \text{PRF}(\lambda, x-1), \quad
\varphi = \text{PRF}(\lambda, x), \quad
t = u_2 \cdot sk_{\mathcal{U}} + u_1, \quad
pk_{\mathcal{U}} = g_1^{sk_{\mathcal{U}}}
\end{array}
\right\}
\tag{7.13}
$$

As a minor detail, the parties mutually exchange various "administrative" information which is checked for validity. The PoS sends its certificate cert_P as part of the first message to show that it is a valid member of the system, or else the user aborts before responding to the DS challenge. Vice versa, the PoS aborts after the second message if the user turns out to be blacklisted, i.e., if the fraud-detection ID φ is listed in the set Φ_bl of blacklisted fraud-detection IDs.

#### **7.3.3 Disbursement**

The task Disburse (cp. Figs. 7.21 and 7.22) complements the task Deposit and is executed between a user and the operator to disburse all points on a wallet which have been deposited before. As detailed in the system definition (cp. Section 4.3.3) the given instantiation is tailored to the post-payment scenario from Section 2.3.3.

Unsurprisingly, the implementation of Disburse is very similar to Deposit and actually simpler: both parties are identified and thus certain checks of validity do not require a ZK

Figure 7.21: The Protocol P5C (cont. from Fig. 7.1) – Task Disburse

proof, the price equals the previous balance b^prev and thus no additional input is required, which allows the protocol to be non-interactive, no new serial number needs to be negotiated as the wallet is destroyed, and no prove-participation tag is necessary. Accordingly, the objectives of Disburse are a subset of the objectives of Deposit:


We refer the reader to the previous section on Deposit for a description. Please note that the second message still contains a ZK proof π, because the updatable commitment c_upd which


Figure 7.22: The Core Protocol for Task Disburse (used by Fig. 7.21)

contains the previous balance b^prev is only unveiled indirectly in order to assert unlinkability to the previous transaction. More precisely, P3 is used to compute a proof π for a statement stmnt from the language L^(3)_gp defined by

$$
L_{gp}^{(3)} := \left\{
\bigl(pk_{\mathcal{U}}, pk_{\mathcal{O}}^{\text{fix}}, pk_{\mathcal{O}}^{\text{cert}}, b^{\text{prev}}, \varphi, t, u_2\bigr)
\;\middle|\;
\begin{array}{l}
\exists\, \lambda, x, sk_{\mathcal{U}}, u_1 \in \mathbb{Z}_p;\;
\varphi^{\text{prev}}, s^{\text{prev}},
pk_{\mathcal{P}}^{\text{prev}} = (pk_{\mathcal{P}}^{\text{upd,prev}}, pk_{\mathcal{P}}^{\text{rc,prev}}),
c^{\text{prev}}_{\text{upd}}, c_{\text{fix}}, \\
\quad d^{\text{prev}}_{\text{upd}}, d_{\text{fix}},\;
\sigma^{\text{prev}}_{\text{upd}}, \sigma^{\text{cert,prev}}_{\mathcal{P}}, \sigma_{\text{fix}},\;
a_{\mathcal{U}}, a_{\mathcal{P}}^{\text{prev}} \,\colon \\[3pt]
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, pk_{\mathcal{U}}), c_{\text{fix}}, d_{\text{fix}}\bigr) = 1, \quad
\text{C1.Open}\bigl(crs^{(1)}_{\text{com}}, (g_1^{\lambda}, g_1^{b^{\text{prev}}}, g_1^{u_1}, g_1^{x}), c^{\text{prev}}_{\text{upd}}, d^{\text{prev}}_{\text{upd}}\bigr) = 1, \\
\text{SIG.Vfy}\bigl(pk_{\mathcal{O}}^{\text{fix}}, \sigma_{\text{fix}}, (c_{\text{fix}}, a_{\mathcal{U}})\bigr) = 1, \quad
\text{SIG.Vfy}\bigl(pk_{\mathcal{P}}^{\text{upd,prev}}, \sigma^{\text{prev}}_{\text{upd}}, (c^{\text{prev}}_{\text{upd}}, s^{\text{prev}})\bigr) = 1, \quad
\text{SIG.Vfy}\bigl(pk_{\mathcal{O}}^{\text{cert}}, \sigma^{\text{cert,prev}}_{\mathcal{P}}, (pk_{\mathcal{P}}^{\text{prev}}, a_{\mathcal{P}}^{\text{prev}})\bigr) = 1, \\
\varphi^{\text{prev}} = \text{PRF}(\lambda, x-1), \quad
\varphi = \text{PRF}(\lambda, x), \quad
t = u_2 \cdot sk_{\mathcal{U}} + u_1, \quad
pk_{\mathcal{U}} = g_1^{sk_{\mathcal{U}}}
\end{array}
\right\}
\tag{7.14}
$$

The proof is a simplified version of the one in the Deposit protocol. The balance b^prev and the public user key pk_U are now part of the statement and not of the witness, and nothing needs to be proven about c′_upd and c_{pk_U}.

## **7.4 Utility Tasks**

#### **7.4.1 Double-Spending Detection and Guilt Verification**

The double-spending tags ω_ds generated by the PoSes are periodically transmitted to the operator's database, which is regularly checked for two double-spending tags ω_ds = (φ, t, u_2), ω′_ds = (φ′, t′, u′_2) which are associated with the same fraud-detection ID φ = φ′. If the database contains two such tags, the operator can use the task DetectDS (see Fig. 7.23) to extract the PID pid_U of the user to whom these double-spending tags belong as well as a proof π that the user is guilty. For an explanation of the double-spending detection mechanism see Section 7.1.4. At the bottom line, two double-spending tags with the same fraud-detection ID denote two points on the same line whose slope is the secret key sk_U of a user. The secret key does not only establish the fraudster's identity but also serves as the proof of guilt. Any party can run the task VerifyGuilt (cp. Fig. 7.24) to check the validity of a pair (pid_U, π). The task is implemented

**UC-Protocol** P5C **(cont.) – Task** DetectDS

Operator input: (detect_ds, ω_ds, ω′_ds)

(1) Parse (φ, t, u_2) := ω_ds and (φ′, t′, u′_2) := ω′_ds.

(2) If φ ≠ φ′ or u_2 = u′_2, output (pid_U = ⊥, π = ⊥) to the operator and terminate.

(3) sk_U := (t − t′)/(u_2 − u′_2) mod p.

(4) pk_U := g_1^{sk_U}.

(5) Receive pid_U from the bulletin-board Fbb for pk_U; if pid_U = ⊥, set π := ⊥, else π := sk_U.

Operator output: (detected_ds, pid_U, π)

Figure 7.23: The Protocol P5C (cont. from Fig. 7.1) – Task DetectDS

### **UC-Protocol** P5C **(cont.) – Task** VerifyGuilt

Party input: (verify_guilt, pid_U, π)


Figure 7.24: The Protocol P5C (cont. from Fig. 7.1) – Task VerifyGuilt

as a local algorithm as the check g_1^π = pk_U is all that is needed. For honest users who have kept their secret key securely away, protection against false accusation follows from the hardness of the DLOG.

#### **7.4.2 Wallet Blacklisting**

The task BlacklistWallet (cp. Figs. 7.25 and 7.26), executed between the dispute resolver and the operator, is used to recover the sequence Φ_λ of fraud-detection IDs of a wallet and thereby allows blacklisting. The implementation is apparent as the blacklisting tag ω_bl is an encryption under the public key of the dispute resolver (cp. Section 7.1.4). Remember that we assume that the dispute resolver and the operator agreed out-of-band which user is going to be blacklisted. After the dispute resolver has decrypted ψ_bl it first checks whether the decrypted user key pk_U equals the expected key pk′_U. Together with the CCA-security of the encryption scheme this rules out malleability attacks which might try to trick the dispute resolver into recovering the fraud-detection IDs of a different (possibly innocent) user than assumed. After that the dispute resolver

#### **UC-Protocol** P5C **(cont.) – Task** BlacklistWallet

Operator input: (blacklist_wallet, ω_bl)

(1) At the operator side:
	- (a) If f_bl(ω_bl) is undefined, output (blacklisted_wallet, ∅) to the operator and halt.
	- (b) Call Fmsg with (establish-session, ident, pid_DR, blacklist_wallet).

(2) At the dispute resolver side: Receive (establishing-session, ssid, pid_O, blacklist_wallet) from Fmsg.

Dispute resolver output: (blacklisting_wallet)

Dispute resolver input: (blacklisting_wallet, pid′_U)

(3) At the dispute resolver side:
	- (a) Load the internally recorded (pk_DR, sk_DR). ⊥
	- (b) Receive pk′_U from the bulletin-board Fbb for PID pid′_U. ⊥
	- (c) Call Fmsg with (accept, ssid).

(4) At the operator side: Receive (accepted, ssid) from Fmsg.

(5) Both sides: Run the code of BlacklistWallet between the dispute resolver and the operator (see Fig. 7.26) using (send, ssid, …) of Fmsg for messaging:

((OK), (Φ_λ)) ← BlacklistWallet ⟨DR(pk_DR, sk_DR, pk′_U), O(ω_bl)⟩ .

(6) At the operator side:
	- (a) Redefine f_bl(ω_bl) := Φ_λ.
	- (b) Call Fmsg with (close, ssid).

(7) At the dispute resolver side: Receive (closed, ssid) from Fmsg.

Dispute resolver output:(blacklisted\_wallet)

Operator output: (blacklisted\_wallet, *bl* )

⊥ If this does not exist, abort.

Figure 7.25: The Protocol P5C (cont. from Fig. 7.1) – Task BlacklistWallet

Figure 7.26: The Core Protocol for Task BlacklistWallet (used by Fig. 7.25)

reconstructs the wallet ID λ from its chunks and evaluates the PRF (x_bound + 1) times. Since each of the chunks λ′_i is small (λ′_i < B), the dispute resolver can compute the discrete logarithms of the Λ′_i in a reasonable amount of time. This algorithm is also not time-critical and is expected to be executed only a few times. At the end, the operator internally redefines f_bl(ω_bl) := Φ_λ with the received blacklist Φ_λ to mark the blacklisting tag ω_bl as blacklisted.

**A tempting but insecure alternative** Skipping ahead to the security proof, we would like to point out an alternative implementation. At first glance, it seems to be sufficient if the dispute resolver decrypts the blacklisting tag ω_bl, only checks whether the expected user key has been decrypted and then sends the decrypted cleartext ψ_bl back to the operator, but leaves the rest of the work to the operator. This seems tempting, because it minimizes the computational work for the trusted third party (the dispute resolver) and puts the operator in charge of computing the DLOGs and evaluating the PRF. However, this shift of the work load makes the security proof fail. The operator must not learn the (secret) wallet ID λ even for blacklisted users. The crux of the matter is that for honest users, who might also become blacklisted, the pseudo-random fraud-detection IDs are replaced by truly random numbers in the security proof, as is also the case in the ideal model, to argue unlinkability. If the operator is corrupted, this would require coming up with a wallet ID that explains a sequence of truly random numbers, which have been output by previous transactions, as the image of the PRF. Hence, it is admissible that the dispute resolver sends an evaluated sequence of the PRF to the operator, but disclosure of the seed is unacceptable. Actually, for the same reason the security proof fails

### **UC-Protocol** P5C **(cont.) – Task** RecalculateBalance

Operator input:(recalculate\_balance, *bl*, rc)


Operator output:(recalculated\_balance, bill)

⊥ If this does not exist, abort.

Figure 7.27: The Protocol P5C (cont. from Fig. 7.1) – Task RecalculateBalance

```
RecalculateBalance(pk_O, sk_O, Φ_λ, Ω_rc)
  parse (pk_O^fix, pk_O^cert, pk_O^upd, pk_O^rc,sig, pk_O^rc,enc) := pk_O
  parse (sk_O^fix, sk_O^cert, sk_O^upd, sk_O^rc,sig, sk_O^rc,enc) := sk_O
  Ψ_rc := { ψ_rc ← ENC2.Dec(sk_O^rc,enc, ω_rc) | ω_rc ∈ Ω_rc }
  Ψ_rc^valid := { (s, φ, p, pk_P^rc, σ_rc) ∈ Ψ_rc | SIG.Vfy(pk_P^rc, σ_rc, (s, φ, p)) = 1 }
  S := { (s, p) | ∃ ψ_rc = (s, φ, p, ⋅, ⋅) ∈ Ψ_rc^valid ∧ φ ∈ Φ_λ }
  b_bill := ∑_{(s,p) ∈ S} p
  return (b_bill)
```

Figure 7.28: The Core Protocol for Task RecalculateBalance (used by Fig. 7.27)

under adaptive corruption. Interestingly, the shift of work load from the dispute resolver to the operator which unveils the wallet ID to the operator and turns out to be formally insecure does not seem to allow for any "real-world attack".⁴

#### **7.4.3 Balance Recalculation**

The task RecalculateBalance (cp. Figs. 7.27 and 7.28) complements wallet blacklisting and allows to match the set of collected recalculation tags Ω_rc with the set of fraud-detection IDs Φ_λ of a blacklisted wallet and thereby to recalculate the true balance of a wallet while taking parallel wallet states due to double-spending into account. The implementation is straightforward: The operator decrypts the recalculation tags, verifies their validity, i.e. drops those which are invalid, and uses the remaining set to sum the prices of those tags whose fraud-detection ID is contained in Φ_λ.
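After decryption and signature verification (both omitted here), the matching step reduces to plain set operations. The sketch below uses hypothetical field names and shows how duplicate tags with the same serial number and price collapse while tags of other wallets are ignored.

```python
# Toy version of the matching and summation step of RecalculateBalance.
from typing import NamedTuple

class RecalcTag(NamedTuple):      # hidden part psi_rc without key and signature
    serial: str                   # s
    fraud_id: str                 # phi
    price: int                    # p

def recalculate_balance(tags: list[RecalcTag], blacklist: set[str]) -> int:
    matched = {(t.serial, t.price) for t in tags if t.fraud_id in blacklist}
    return sum(price for _, price in matched)

tags = [RecalcTag("s1", "phi0", 5), RecalcTag("s2", "phi1", 7),
        RecalcTag("s3", "phiX", 9),                     # not on the blacklist
        RecalcTag("s1", "phi0", 5)]                     # duplicate serial number
print(recalculate_balance(tags, {"phi0", "phi1"}))      # -> 12
```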

⁴ At least, we could not come up with one.

As already discussed for the definition of the system, RecalculateBalance only gives very weak guarantees (cp. Sections 4.4.3 and 5.4.2). However, here we would like to point out another detail that should trigger action in the "real world", outside the scope of the UC-model, and thus cannot be appropriately described⁵ by pseudo-code. The serial number s is assumed to uniquely identify a single transaction and to only occur once. If the operator encounters the same serial number twice, this is a clear indicator that at least one PoS must be corrupted and RecalculateBalance should throw an exception. In practice, this event should lead to further investigations and actions. The operator should try to identify the corrupted PoS and exclude it from the network.

Please note that a corrupted PoS might invent recalculation tags for the same serial number, validly sign them and send them to the operator. Those duplicates do not even need to belong to transactions that have taken place in the physical world.⁶

#### **7.4.4 Proof of Participation**

The task ProveParticipation (cp. Fig. 7.29) is used by users to prove to the violation enforcer that they participated in a specific Deposit transaction with a PoS. To this end, the violation enforcer has been triggered by the PoS to physically identify the offending user, e.g. by taking a photo. Remember that this task is probably only required in specific scenarios such as post-payment scenarios in which users are not physically prevented from gaining whatever benefit the system offers without paying first (cp. Section 2.3.3). Moreover, due to physical limits it might be impossible to exactly identify a single user; instead, several users, of which all but one are innocent, may accidentally come under suspicion. The structure of prove-participation tags is described in Section 7.1.4 and the implementation of ProveParticipation is straightforward. The presumably guilty user is challenged on a set of prove-participation tags ω_pp which are connected to the offending incident in a timely and spatial manner and which must be provided by the same PoS which triggered the physical identification. The suspected user may then pick one of the proposed tags and unveil it. The binding property of the commitment underlying ω_pp asserts that only the legitimate owner can do so successfully, and also only for the associated transaction.

⁵ Of course, there are options to extend the expressiveness of what can be described by pseudo-code. For example, introduce a new ideal functionality that provides a "handle" to the physical world and is used as a black-box. But this seems to be much of an overkill for a rather trivial issue.

⁶ Note that part of the solution for the other issues is to also make the user sign the recalculation tags. This mitigates the problem, but does not entirely solve it: the environment could still invent a user and corrupt it. The solution only rules out creating valid recalculation tags that involve honest users.

*ᵃ* N.b., ψ_pp = ⊥ may hold if f_pp(ω_pp) is undefined.

Figure 7.29: The Protocol P5C (cont. from Fig. 7.1) – Task ProveParticipation

```
VerifyWallet(pk_O, pk_U, τ)
  parse (pk_O^fix, pk_O^cert, pk_O^upd, pk_O^rc,sig, pk_O^rc,enc) := pk_O
  parse (s, φ, x^next, λ, a_U, c_upd, d_upd, σ_upd, cert_P,
         c_fix, d_fix, σ_fix, b, u_1^next) := τ
  parse (pk_P, a_P, σ_P^cert) := cert_P
  parse (pk_P^upd, pk_P^rc) := pk_P
  if   C1.Open(crs^(1)_com, (g_1^λ, pk_U), c_fix, d_fix) = 0 ∨
       SIG.Vfy(pk_O^fix, σ_fix, (c_fix, a_U)) = 0 ∨
       C1.Open(crs^(1)_com, (g_1^λ, g_1^b, g_1^{u_1^next}, g_1^{x^next}), c_upd, d_upd) = 0 ∨
       SIG.Vfy(pk_P^upd, σ_upd, (c_upd, s)) = 0 ∨
       SIG.Vfy(pk_O^cert, σ_P^cert, (pk_P, a_P)) = 0 ∨
       PRF(λ, x^next − 1) ≠ φ
  then return 0
  else return 1
```
Figure 7.30: Helper Algorithm VerifyWallet

#### **7.4.5 Wallet Verification**

The algorithm VerifyWallet (cp. Fig. 7.30) is not a task by itself, but only a helper algorithm that is used inside of IssueWallet and Deposit (cp. Figs. 7.13, 7.16 and 7.17). With this algorithm a user can verify that the wallet he stores at the end of a transaction is valid. In particular, the algorithm verifies that the commitments c_fix and c_upd are valid and contain the values they are supposed to contain, that σ_fix is a valid signature on (c_fix, a_U) under pk_O^fix, that σ_upd is a valid signature on (c_upd, s) under the PoS' key pk_P^upd, that the certificate cert_P containing pk_P is valid, and that the fraud-detection ID φ was calculated using the correct values.

## **8 Security Theorem and Proof**

In this chapter we show that P5C UC-realizes Fapc in the (FCRS, Fbb, Fmsg)-hybrid model for static corruption. More precisely, we show the following theorem:

**Theorem 8.1 (Security Statement)** *Assume that the SXDH-problem is hard for gp := (G_1, G_2, G_T, e, p, g_1, g_2), the x_bound-DDHI problem is hard for G_1, the DLOG-problem is hard for G_1 and our building blocks (NIZK, commitment schemes, signature scheme, encryption schemes and PRF) are instantiated as described in Section 6.2. Then*

$$
\pi_{\mathsf{P5C}}^{\mathcal{F}_{\mathsf{CRS}}, \mathcal{F}_{\mathsf{bb}}, \mathcal{F}_{\mathsf{msg}}} \geq_{\mathsf{UC}} \mathcal{F}_{\mathsf{apc}} \tag{8.1}
$$

*holds under static corruption of either*

*(1) a subset of users, or*

*(2) all users and a subset of PoSes, operator and violation enforcer, or*

*(3) a subset of PoSes, operator and violation enforcer, or*

*(4) all PoSes, operator and violation enforcer as well as a subset of users.*


Proof Follows from Theorems 8.2 and 8.28.

Informally, this means the ideal model and our protocol are indistinguishable and therefore provide the same guarantees regarding security and privacy. Please note that the hardness of the DLOG-problem is already implied by the SXDH-assumption.

This chapter is organized as follows. In Section 8.1 we discuss the corruption model, especially the restrictions on the set of corrupted parties and why this limited corruption model does not seem to be a severe restriction from a practical vantage point. In Section 8.2 we give a brief outline of the proof on a high level. The actual proof of Theorem 8.1 is given in two parts:

(1) Operator security is proven in Section 8.3 (Theorem 8.2).

(2) User security and privacy are proven in Section 8.4 (Theorem 8.28).


Both sections follow the usual approach and prove the statement in a sequence of hybrids.

## **8.1 Adversarial Model**

Firstly, we only consider security under static corruption. This is a technical necessity to enable the use of PRFs to generate fraud-detection IDs. With adaptive corruption the simulator would be required to come up with a consistent seed for the PRF that could explain the fraud-detection IDs drawn uniformly at random up to the point of corruption. We deem static corruption to provide a sufficient level of security as a statically corrupted party may always decide to interact honestly first and then deviate from the protocol later (cp. Definition 3.14 and the discussion thereafter). Adaptive corruption is tightly related to deniability, which is not part of our desired properties. Instead, features like blacklisting, proof of participation and balance recalculation are quite contrary to deniability. Obviously, traceability of blacklisted users requires that users are indisputably bound to their past transactions.

Of course, there might be valid applications that do not require these features but demand deniability. In these cases, the tasks BlacklistWallet, RecalculateBalance and ProveParticipation could be removed from the system. This would allow using a truly uniform random distribution instead of a PRF for the fraud-detection IDs, and the encryption for the key-escrow mechanism could be dropped, too. A close look at the security proof unveils that it holds under adaptive corruption after these modifications. In [Nag+17] the BBA+-scheme, which does not include these features, is shown to provide a security feature which is called *backward- and forward-privacy*. Although adaptive security and backward- and forward-privacy are not directly comparable for formal reasons, the latter is even stronger than adaptive security on an informal level as it guarantees users to be unlinkable in future transactions even after a corruption.

Secondly, we separately consider operator security and user security which means that Z is only allowed to corrupt certain restricted sets of parties (cp. Theorem 8.1). For operator security either (1) a subset¹ of users or (2) all users and a subset of PoSes, operator and violation enforcer is allowed to be corrupted. For user security and privacy either (3) a subset of PoSes, operator and violation enforcer or (4) all of PoSes, operator and violation enforcer as well as a subset of users might be corrupted. It is best to picture the cases inversely: To prove operator security we consider a scenario in which at least some parties at the operator's side remain honest; to prove user security we consider a scenario in which at least some users remain honest. Please note that both scenarios also commonly cover the case in which all parties are corrupted. However, this extreme case is tedious as it is trivially simulatable.

One might believe that the combination of all cases above should already be sufficient to guarantee privacy, security and correctness under arbitrary corruption. For example, case (4)

¹ Note that "subset" also includes the empty or full set.

guarantees that privacy and correctness of accounting are still provided for honest users, even if all of the operator's side and some fellow users are corrupted. This ought to be the worst case from an honest user's perspective. Further note that the proof of indistinguishability quantifies over all environments Z. This includes environments that—still in case (4)—first corrupt all the operator's side but then let some (formally corrupted) parties follow the protocol honestly.

From a technical point of view, the crux is the ZK proofs, which can either be extracted or simulated, but not both in the same experiment for different transactions that involve an honest user and a corrupted PoS in one transaction and, vice versa, a corrupted user and an honest PoS in another transaction. Note that this problem vanishes in cases (2) and (4), because in these cases all parties belonging to one side are completely corrupted and interactions with corrupted parties of the other side are trivially simulatable. This suggests that the truly mixed case should fail due to some sort of malleability attack involving honest users on one side, a man-in-the-middle who cobbles together ZK proofs, and honest PoSes on the other side. One might expect to find an adversary who merges an ensemble of wallets in a way such that the resulting wallet states cannot be mapped in the ideal model. However, we were not able to construct such an adversary. Even more interestingly, as explained in Chapter 10, using a non-shrinking commitment scheme in exchange for less efficiency allows to waive extractable ZK proofs and thus enables a proof under arbitrary corruption. One would expect that this observation should help to "spot the weak point". All in all, it seems that the proof of indistinguishability (for our proposed, *efficient* implementation with shrinking commitments) under arbitrary corruption only fails due to a formal problem but does not allow for a "practical" attack in the real world.

## **8.2 Proof Outline**

As mentioned above we separately prove operator security with respect to an environment Z_op-sec as well as user security and privacy with respect to an environment Z_user-sec. Both proofs are conducted by explicitly specifying a simulator S_P5C^op-sec and S_P5C^user-sec, resp., for the ideal experiments EXEC_{P5C, S_P5C^op-sec, Z_op-sec}(1^n) and EXEC_{P5C, S_P5C^user-sec, Z_user-sec}(1^n), resp. For each scenario we define a sequence of hybrid experiments together with simulators S_i and protocols π_i. Each hybrid is of the form

$$\mathcal{H}_i := \mathsf{EXEC}_{\pi_i, \mathcal{S}_i, \mathcal{Z}}(1^n). \tag{8.2}$$

The first hybrid is identical to the real experiment and the last hybrid is identical to the ideal experiment. The general idea is that the protocol for honest parties gradually declines from the real protocol π_0 = π_P5C to a dummy protocol, which does nothing but relay in- and outputs. At the same time S_i progresses from the dummy adversary S_0 = D to the final simulator, which can be split up into the ideal functionality Fapc and S_P5C^op-sec or S_P5C^user-sec, resp. Instead

of directly proving indistinguishability of the real and ideal experiment we can break the proof down into showing indistinguishability of each pair of consecutive hybrids. We achieve this by demonstrating that whenever Zop‐sec or Zuser‐sec, resp., can distinguish between two consecutive hybrids with non-negligible probability this yields an efficient adversary against one of the underlying cryptographic assumptions. The indistinguishability between the real and ideal experiment follows from the pairwise indistinguishable of consecutive hybrids.
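For illustration, the proof strategy rests on the standard telescoping bound below, where k denotes the index of the last hybrid and "H_i = 1" is shorthand for the event that the environment outputs 1 in the i-th hybrid (both pieces of notation are only used in this aside):

$$
\left| \Pr[\mathcal{H}_0 = 1] - \Pr[\mathcal{H}_k = 1] \right| \;\leq\; \sum_{i=0}^{k-1} \left| \Pr[\mathcal{H}_i = 1] - \Pr[\mathcal{H}_{i+1} = 1] \right|
$$

If every summand is negligible, so is the whole sum, because the number of hybrids is a constant that does not depend on the security parameter.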

The simulator for operator security S_P5C^op-sec and the simulator for user security S_P5C^user-sec share a good deal of common code, because some combinations of corrupted parties occur within particular tasks in both settings. This is unsurprising if one considers that not much seems to be missing for an arbitrary corruption setting. The shared code for the common part is presented in both simulators. Naturally, this causes redundancy, but this way each simulator is complete and we avoid a lot of confusing references. However, the hybrids which transform the real experiment into the ideal experiment are only presented once. They are only defined and proven for operator security (cp. Section 8.3) and re-used for user security (cp. Section 8.4). Nonetheless, there are still segments within the sequence of hybrids that differ between operator security and user security.

For the proof of operator security (cp. Section 8.3) input privacy does not pose any problem. The user learns nearly everything about the operator as part of its prescribed output, and thus nearly all messages can be simulated perfectly. The crucial point is to prove that no user can deviate from the protocol and thereby cheat the operator. To this end, S_P5C^op-sec essentially uses the extraction property of the zero-knowledge scheme to scrutinize messages from corrupted users for discrepancies.

In contrast, the proof of user security (cp. Section 8.4) follows a different spirit. In this case, input privacy of the user is the crucial point. For this reason, most hybrids replace messages from the user by information-theoretically "empty" messages that are independent of any user secret.

## **8.3 Proof of Operator Security**

In this section we show the following theorem.

**Theorem 8.2 (Operator Security)** *Under the assumptions of Theorem 8.1*

$$
\pi_{\text{P5C}}^{\mathcal{F}_{\text{CRS}}, \mathcal{F}_{\text{bb}}, \mathcal{F}_{\text{msg}}} \geq_{\text{UC}} \mathcal{F}_{\text{apc}} \tag{8.3}
$$

*holds under static corruption of*

*(1) a subset of users, or*

*(2) all users and a subset of PoSes, operator and violation enforcer.*

The definition of the UC-simulator S_P5C^op-sec for Theorem 8.2 can be found in Figs. 8.2 to 8.18. Please note that while the real protocol π_P5C lives in the (F_CRS, F_bb, F_msg)-model, the ideal functionality F_apc has no CRS. The CRS is therefore simulated by S_P5C^op-sec, giving it a lever to extract the ZK proofs P1, P2, and P3 and to equivocate the commitments C2 and C4.

While the protocol executes, the simulator S_P5C^op-sec records certain information similar to what the parties or the ideal functionality internally record, namely the map of simulated prove-participation tags and the simulated transaction graph *TRDB*. Basically, these correspond to their counterparts, the map of prove-participation tags and the transaction graph *TRDB*, resp., but exist in the head of the simulator and are augmented by additional information. The simulator uses them as "lookup tables" to keep the simulation consistent in later parts of the protocol. Obviously, this implies that information is stored redundantly: in the head of S_P5C^op-sec on the one hand, and inside the ideal functionality F_apc (in case of *TRDB*) or the environment (in case of the prove-participation tags of a corrupted user) on the other hand. A crucial part of the security proof is to show that these sets stay in sync.

Before starting with the security proof, we explain the *Simulated Transaction Graph TRDB*. This *Simulated Transaction Graph* resembles the Ideal Transaction Graph (cp. Definition 5.1) but augments each node by the in- and out-commitments (c_upd^in, c_fix^in) and (c_upd^out, c_fix^out) from the real protocol. A *Simulated Transaction Entry trdb* has the form

$$\begin{aligned} \overline{\text{trdb}} &= (\text{s}^{\text{prev}}, \text{s}, \rho, \propto, \lambda, \text{pd}\_{\text{df}}, \text{pid}\_{\text{p}}, \text{p}, b, \\ U\_1^{\text{next}}, \omega\_{\text{ds}}, \omega\_{\text{rc}}, \omega\_{\text{pp}}, \\ c\_{\text{fix}}^{\text{in}}, d\_{\text{fix}}^{\text{in}}, m\_{\text{fix}}^{\text{in}}, c\_{\text{upd}}^{\text{in}}, d\_{\text{upd}}^{\text{in}}, m\_{\text{upd}}^{\text{in}}, \\ c\_{\text{fix}}^{\text{out}}, d\_{\text{fix}}^{\text{out}}, m\_{\text{fix}}^{\text{out}}, c\_{\text{upd}}^{\text{out}}, d\_{\text{upd}}^{\text{out}}, m\_{\text{upd}}^{\text{out}}) \end{aligned} \tag{8.5}$$

with c, d and m with equal suffixes denoting a commitment, its decommitment information and the opening in the implicit message space (see Fig. 8.1). These commitments are the fixed and updatable part of the wallet before and after the transaction (cp. Chapter 7). At the beginning of a transaction in the scope of Deposit or Disburse the user loads his previous token, which contains the two commitments c_fix and c_upd^prev, randomizes the commitments, and at the end the user possesses

Figure 8.1: An entry *trdb* ∈ *TRDB* visualized as an element of a directed graph

Figure 8.2: The Simulator for Operator Security



RegisterUser (for corrupted user): Upon receiving input (register, pk_U) from Z^op-sec for F_bb in the name of U with PID pid_U, and if ̄ keys(pid_U) is undefined, call F_apc with input (register) in the name of the same user, ignore the subsequent leak (registering\_user, pid_U) from F_apc and append pid_U ↦ (pk_U, ⊥) to ̄ keys.*ᵇ*

Figure 8.3: The Simulator for Operator Security (cont. from Fig. 8.2)

*ᵃ* Corrupted PoSes essentially have two options: They can either register "some" public key at the bulletin board or not. (N.b., the public key does not need to be honestly generated.) If they register their public keys, then they are regarded as registered from the perspective of the real protocols. Hence, the simulator must also register the PoSes with Fapc, otherwise Fapc would subsequently abort, but the real protocols do not.

*ᵇ* Corrupted users essentially have two options: They can either register a public key at the bulletin board or not. (N.b., the public key does not need to be honestly generated.) If they register their public keys, then they are regarded as registered from the perspective of the real protocols. Hence, the simulator must also register the user with Fapc, otherwise Fapc would subsequently abort, but the real protocols do not.

the first place. Conversely, if the PoS had registered at the bulletin board, the real protocol does not abort. However, in this case S_P5C^user-sec would have (silently) defined ̄ keys(pid_P) and registered the PoS with F_apc and thus F_apc does not abort either.

Figure 8.4: The Simulator for Operator Security (cont. from Fig. 8.2)

#### **Simulator** S op‐sec P5C **(cont.)**

IssueWallet (for honest operator and honest user): Upon receiving leakage (issuing\_wallet) from F_apc and being asked to provide ω_bl …

> (1) Pick λ″ ←R ℤ_p. (2) Set ψ_bl ← ENC1.Enc(pk_DR, (1, … , 1)) with ℓ+2 ones.


Figure 8.5: The Simulator for Operator Security (cont. from Fig. 8.2)

two updated commitments c_fix, c_upd which are stored in the token again. We call the initial commitments the *in*-commitments of the transaction and the resulting commitments the *out*-commitments.

**Definition 8.3 (Simulated Transaction Graph (informal))** *The set TRDB* = {*trdb* } *with trdb defined as in Eq.* (8.5) *is called the* Simulated Transaction Graph*. It inherits the graph structure of the Ideal Transaction Graph and augments each edge by additional labels, called the* in-commitments *and* out-commitments*.*

Two remarks are in order:


The augmented information gives an alternative set of edges where two transactions trdb and trdb′ are connected if (c_upd^out, c_fix^out) corresponds to (c_upd^in′, c_fix^in′).² The hybrids which are specific to operator security introduce additional "sanity checks" on this alternative graph structure: if the sanity check holds, both transaction graphs are still in sync and the simulator proceeds; if the

² N.b.: Commitments "correspond" if they are re-randomizations of each other, i.e. if they have the same message.

RegisterOp and RegisterDR previously, otherwise Fapc would already have aborted.

Figure 8.6: The Simulator for Operator Security (cont. from Fig. 8.2)

Figure 8.7: The Simulator for Operator Security (cont. from Fig. 8.2)

if the user only did so at corrupted PoSes that undermine double-spending detection.

*ᶜ* Here, we only consider an honest operator.

*ᵈ* N.b.: These assignments exist. The operator/PoS must have called RegisterOp/RegisterPOS previously, otherwise Fapc would already have aborted.


Figure 8.8: The Simulator for Operator Security (cont. from Fig. 8.2)

Figure 8.9: The Simulator for Operator Security (cont. from Fig. 8.2)

*ᵃ* N.b., even if the user commits double-spending no "useful", previous double-spending tag may exist in ds, if the user only did so at corrupted PoSes that undermine double-spending detection.

*ᵇ* N.b.: Fapc does not always ask for the next serial number. If the corrupted user re-uses an old token, then Fapc internally picks the next serial number which has already been determined in some earlier interaction. Hence, the S op‐sec P5C only needs to provide the next serial number, if the chain of transactions is extended.

*<sup>ᶜ</sup>* The hidden recalculation tag is of the form rc = (, , , *pk*rc P , rc) ∈ <sup>1</sup> × <sup>1</sup> × ℤ × (<sup>2</sup> 1 × <sup>3</sup> 2 ) × (<sup>2</sup> 2 × <sup>1</sup> ), e.g., rc ≔ (1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) would be a good choice.

Figure 8.10: The Simulator for Operator Security (cont. from Fig. 8.2)

Figure 8.11: The Simulator for Operator Security (cont. from Fig. 8.2)

sanity check fails, the adversary has caused the transaction graphs to fall apart and the simulator immediately gives up the simulation. Each sanity check is related to the security of one of the building blocks or cryptographic assumptions. Together, these checks collectively assert that the alternative graph structure of the Simulated Transaction Graph coincides with the Ideal Transaction Graph and thus no efficient adversary can deviate from the Ideal Transaction Graph.

We proceed by giving concrete (incremental) definitions of all hybrids H_i^op-sec.

**Hybrid op‐sec** 0 **(The real experiment)** The hybrid op‐sec 0 is defined as

$$\mathcal{H}\_0^{\text{op-sec}} := \mathsf{EXEC}\_{\pi\_0^{\text{op-sec}}, \mathcal{S}\_0^{\text{op-sec}}, \mathsf{Z}^{\text{op-sec}}}(1^n) \tag{8.6}$$

with S_0^op-sec ≔ D being identical to the dummy adversary and π_0^op-sec ≔ π_P5C. Hence, H_0^op-sec denotes the real experiment.

**Hybrid op‐sec** 1 **(Fake setup)** In hybrid op‐sec <sup>1</sup> we modify S op‐sec 1 such that *crs*pok is generated by SetupExt, and *crs* (2) com as well as *crs* (4) com are generated by SetupSim. S op‐sec 1 initializes the simulated transaction graph *TRDB* and ̄ keys and ̄ pp as "empty" maps. Additionally, <sup>S</sup> op‐sec 1 invokes an internal instance of F sim msg instead of the external instance Fmsg and reroutes all

*ᵇ* N.b., even if the user commits double-spending no "useful", previous double-spending tag may exist in ds, if the user only did so at corrupted PoSes that undermine double-spending detection.

*ᶜ* N.b.: This assignment exists. The operator must have called RegisterOp previously, otherwise Fapc would already have aborted.

*ᵈ* Step 1d is only executed, if the user commits double-spending.

*<sup>ᵉ</sup>* The hidden recalculation tag is of the form rc = (, , , *pk*rc P , rc) ∈ <sup>1</sup> × <sup>1</sup> × ℤ × (<sup>2</sup> 1 × <sup>3</sup> 2 ) × (<sup>2</sup> 2 × <sup>1</sup> ), e.g., rc ≔ (1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) would be a good choice.

Figure 8.12: The Simulator for Operator Security (cont. from Fig. 8.2)

Figure 8.13: The Simulator for Operator Security (cont. from Fig. 8.2)

Hence, the S P5C only needs to provide the next serial number, if the chain of transactions is extended. *<sup>ᶜ</sup>* The hidden recalculation tag is of the form rc = (, , , *pk*rc P , rc) ∈ <sup>1</sup> × <sup>1</sup> × ℤ × (<sup>2</sup> 1 × <sup>3</sup> 2 ) × (<sup>2</sup> 2 × <sup>1</sup> ), e.g., rc ≔ (1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) would be a good choice.

Figure 8.14: The Simulator for Operator Security (cont. from Fig. 8.2)

#### **Simulator** S op‐sec P5C **(cont.)**

DetectDS (for honest operator):

(1) Upon receiving leakage (detecting\_ds, ω_ds, ω′_ds) from F_apc and being asked to provide (pid_U, π, *result*), …
   (a) Parse (φ, t, u_2) ≔ ω_ds and (φ′, t′, u′_2) ≔ ω′_ds.
   (b) If φ = φ′ and u_2 ≠ u′_2: (i) Set sk_U ≔ (t − t′)/(u_2 − u′_2) mod p. (ii) Set pk_U ≔ g_1^{sk_U}. Else, set (pk_U, sk_U) ≔ (⊥, ⊥).
   (c) Set pid_U ≔ ̄ keys^{−1}(pk_U, ⋅); if ̄ keys^{−1} is not defined for pk_U, set pid_U ≔ ⊥.
   (d) If pid_U ≠ ⊥, then set (π, *result*) ≔ (sk_U, OK), else set (π, *result*) ≔ (⊥, NOK).
   (e) Provide (pid_U, π, *result*) to F_apc.

(2) Upon receiving leakage (detecting\_ds, pid_U) from F_apc and being asked to provide π, …
   (a) Set (pk_U, sk_U) ≔ ̄ keys(pid_U).*ᵃ*
   (b) Provide π ≔ sk_U to F_apc.

VerifyGuilt (for honest party): Upon receiving leakage (verifying\_guilt, pid_U, π) from F_apc and being asked to provide *result* …

(1) Set (pk_U, ⋅) ≔ ̄ keys(pid_U).
(2) If g_1^π = pk_U, then provide *result* ≔ OK, else *result* ≔ NOK to F_apc.

*ᵃ* This assignment exists. (detecting\_ds, pid_U) is only leaked if the user truly committed double-spending. In this case Step 5 in Fig. 8.9 and Step 4 in Fig. 8.13 have been called previously. In all other cases the user is honest and therefore S_P5C^op-sec knows sk_U anyway.

Figure 8.15: The Simulator for Operator Security (cont. from Fig. 8.2)

#### **Simulator** S op‐sec P5C **(cont.)**

BlacklistWallet (for honest operator): Upon receiving leakage (blacklisting\_wallet, , ) from Fapc and being asked to provide , provide ≔ PRF(, ) to Fapc.

RecalculateBalance (for honest operator): Upon receiving leakage (recalculating\_balance, *bl*, fake rc ) from Fapc and being asked to provide deviate …


Figure 8.16: The Simulator for Operator Security (cont. from Fig. 8.2)

input/output accordingly. All calls to the bulletin-board Fbb are handled internally by S op‐sec 1 using the map ̄ keys.

**Hybrid op‐sec** 2 **(Simulate honest keys)** Hybrid op‐sec 2 replaces the code in the tasks RegisterDR, RegisterOp, RegisterPOS and RegisterUser of the protocol op‐sec 2 such that the simulator S op‐sec 2 is asked for the keys instead. Also, if corrupted PoSes or users try to register a (maliciously) generated public key at the bulletin-board Fbb, then S op‐sec 2 calls RegisterPOS or RegisterUser, resp., in order to simultaneously register the parties for Fapc. S op‐sec 2 defines ̄ keys appropriately. This equals the method in which the keys are generated in the ideal experiment.

**Hybrid op‐sec** 3 **(Simulate PoS' certificate)** In hybrid op‐sec 3 the task CertifyPOS is modified. The protocol op‐sec 3 is modified such that the simulator S op‐sec 3 receives the message (certifying\_pos, *pid*<sup>P</sup> , P), creates the certificate *cert*<sup>P</sup> including the signature cert P and records it. Whenever the honest operator or honest PoSes running op‐sec <sup>3</sup> would send *cert*<sup>P</sup> (or cert P ) as part of their messages in the scope of IssueWallet or Deposit, they omit *cert*P. Instead, the simulator S op‐sec 3 injects *cert*<sup>P</sup> into the messages.

**Hybrid op‐sec** 4 **(Simulate wallet signatures and wallet update information)** Hybrid op‐sec 4 replaces the code in the tasks IssueWallet and Deposit of the protocol op‐sec 4 such that the operator/PoS do not create signatures, but the simulator S op‐sec 4 creates the signatures fix, upd and pp resp. and injects them into the messages instead.

Moreover, in Deposit in case of a corrupted user the PoS running op‐sec 4 does not send upd and ″ upd in its final message, but outputs the price to simulator S op‐sec 4 as Fapc would do. Simulator S op‐sec 4 creates upd and ″ upd honestly and injects them into the message.

*ᵇ* If ̄ pp(pp) is undefined, we stipulate ∗ pp = ⊥ and set ∗ pp = <sup>∗</sup> = ⊥.

Figure 8.17: The Simulator for Operator Security (cont. from Fig. 8.2)

*pk*<sup>U</sup> *ᶜ* ∗ pp = ⊥ holds, if and only if pp is made up by the environment. In this case and if the PoS is corrupted, too, Step 4 will be executed.

*ᵈ* N.b., Fapc only leaks this, if pp has not legitimately been issued and the PoS with *pid*<sup>P</sup> is corrupted, too.

Figure 8.18: The Simulator for Operator Security (cont. from Fig. 8.2)

**Hybrid op‐sec** 5 **(Simulate serial number)** Hybrid H_5^op-sec modifies the tasks IssueWallet and Deposit in case of a corrupted user. The code of π_5^op-sec for the operator/PoS is modified such that it does not send c″_ser in the scope of IssueWallet or Deposit. Instead S_5^op-sec runs (c″_ser, d̄_ser) ← C4.CommitSim(crs^(4)_com) and injects c″_ser into the message. Moreover, π_5^op-sec for the operator/PoS is modified such that it uniformly and independently picks s ←R ℤ_p and passes s to S_5^op-sec as part of the final message. S_5^op-sec calculates s″ ≔ s ⋅ (s′)^−1, executes d″_ser ← C4.Equivoke(crs^(4)_com, td_eqcom, s″, c″_ser, d̄_ser) and injects s″ together with d″_ser into the messages from operator/PoS to the user.

**Hybrid op‐sec** 6 **(Recover wallet ID and scrutinize equations)** When S_6^op-sec receives a NIZK proof in the scope of IssueWallet, Deposit and Disburse, it extracts the witness and recovers the wallet ID Λ ≔ λ″ + ∑_{j=0}^{ℓ−1} DLOG(λ′_j) ⋅ β^j, where β denotes the base of the chunk decomposition (see the sketch after this hybrid).

Moreover, the verification of the proof is moved from op‐sec 6 for the honest operator/PoS to the simulator. If the verification fails, S op‐sec 6 aborts as the operator/PoS running the real protocol would do.

Additionally, S_6^op-sec checks whether the pair of the statement and the extracted witness fulfills the languages L^(1)_gp, L^(2)_gp, and L^(3)_gp, resp. If not, S_6^op-sec gives up the simulation with failure event (*E1*).
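To illustrate why the chunk-wise recovery of the wallet ID is efficient, the following Python sketch recombines an ID from group-element chunks by brute-forcing the small discrete logarithms. The toy modulus, generator and chunk base are illustrative assumptions, not the parameters of the actual scheme.

```python
# Minimal sketch (toy parameters): recover a wallet ID from group-element
# chunks g^{lambda'_j} by brute-forcing the small discrete logarithms.
q = 2**31 - 1        # toy prime modulus (stand-in for the group)
g = 5                # toy generator
BASE = 2**10         # assumed chunk base, so every chunk digit is < BASE

def dlog_small(h):
    # Brute force suffices because the chunks are "sufficiently small".
    for x in range(BASE):
        if pow(g, x, q) == h:
            return x
    raise ValueError("chunk out of range")

def recover_wallet_id(lam2, chunks):
    # Lambda = lambda'' + sum_j DLOG(lambda'_j) * BASE^j
    return lam2 + sum(dlog_small(h) * BASE**j for j, h in enumerate(chunks))

# Usage: encode the digits of (Lambda - lambda'') as group elements and recover Lambda.
lam2, digits = 7, [3, 1023, 0, 517]
chunks = [pow(g, d, q) for d in digits]
Lambda = lam2 + sum(d * BASE**j for j, d in enumerate(digits))
assert recover_wallet_id(lam2, chunks) == Lambda
```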

**Hybrid op‐sec** 7 **(Record Tags)** Hybrid H_7^op-sec replaces the code of the protocol π_7^op-sec in the tasks IssueWallet, Deposit and Disburse such that the various tags are not exclusively created by the parties' code but with support from S_7^op-sec and are then recorded by S_7^op-sec. More precisely, these are

$$
\omega_{\text{bl}} := (\lambda'', \psi_{\text{bl}}) \tag{8.7}
$$

$$
\omega_{\text{rc}} \leftarrow \mathsf{ENC2.Enc}(pk_{\mathcal{O}}^{\text{rc,enc}}, \psi_{\text{rc}}) \qquad\qquad \omega_{\text{pp}} := c_{pk_{\mathcal{U}}} \tag{8.8}
$$

To this end, π_7^op-sec and S_7^op-sec are changed in detail as follows.

For the blacklisting tag ω_bl: In the scope of IssueWallet the honest operator does not pick and send the share λ″ of the wallet ID, but lets S_7^op-sec pick λ″ and inject it into the message. When the honest or corrupted user sends³ ψ_bl, S_7^op-sec removes it from the message, and the code of the operator is modified such that the operator does not expect to receive ψ_bl. Instead, the operator asks S_7^op-sec to provide the final ω_bl, which is then output by the operator. Also, S_7^op-sec records the mapping Λ ↦ ω_bl as F_apc would do.⁴

³ N.b.: S_7^op-sec controls F_msg^sim and therefore also sees the messages of an honest user that runs π_7^op-sec.

⁴ S_7^op-sec knows Λ due to hybrid H_6^op-sec.


In summary, these modifications leak (, ) (for ds-tags), *pid*<sup>P</sup> (for pp-tags) and—in case of a corrupted operator in Deposit—also (for rc-tags). This equals the behavior of the final Fapc (cp. Fig. 4.11, Step 10 and Fig. 4.12, Step 7). On top, op‐sec 7 *provisionally* leaks ′ rc and ′ pp which are still honestly created by op‐sec 7 and simply mirrored back by S op‐sec 7 as rc and pp, resp. This over-leakage is reverted in hybrids op‐sec <sup>26</sup> and op‐sec <sup>27</sup> .

**Hybrid op‐sec** 8 **(Create simulated transaction graph and lookup tables)** When S_8^op-sec receives a NIZK proof in the scope of IssueWallet, Deposit and Disburse, it uses the extracted witness from hybrid H_6^op-sec to assemble all parts of *trdb* and appends *trdb* to *TRDB*. This also includes ω_bl, ω_ds, ω_rc and ω_pp created in hybrid H_7^op-sec.

⁵ N.b., for operator security the operator is always honest, i.e. the latter case never holds. However, we explicitly consider this case here, as this allows us to reuse this hybrid as hybrid <sup>7</sup> to prove user security.

When a new entry *trdb* is assembled in the scope of the tasks Deposit or Disburse, S_8^op-sec compiles the set Ω_ds ≔ { ω_ds | (… , s^prev, … , ω_ds, … ) ∈ *TRDB* } as F_apc would do. If there exist matching double-spending tags ω_ds, ω′_ds ∈ Ω_ds, then set (pid_U, π) ≔ OK with π ≔ sk_U to record this incident of double-spending as S_P5C^op-sec would do. If pid_U ∈ PID_corr, reconstruct sk_U first as S_P5C^op-sec would do and redefine ̄ keys(pid_U) ≔ (pk_U, sk_U) (cp. Step 5 in Fig. 8.9 and Step 4 in Fig. 8.13).

**Hybrid op‐sec** 9 **(Check predecessor)** In the scope of Deposit or Disburse, the simulator S_9^op-sec looks up the predecessor entry with s^prev being used as the unique key. If this fails, S_9^op-sec gives up the simulation with event *E2*.

**Hybrid op‐sec** 10 **(Check updatable part of wallet)** The simulator S_10^op-sec additionally checks whether c_upd^out* ≠ c_upd^prev and gives up the simulation with event *E3* if the check succeeds.

**Hybrid op‐sec** 11 **(Check wallet ID)** The simulator S op‐sec <sup>11</sup> additionally checks for ≠ ∗ 1 and gives up the simulation with event *E4*, if the check succeeds.

**Hybrid op‐sec** 12 **(Check fixed part of wallet)** The simulator S_12^op-sec additionally checks whether c_fix^out* ≠ c_fix and gives up the simulation with event *E5* if the check succeeds.

**Hybrid op‐sec** 13 **(Check user ID)** The simulator S_13^op-sec parses (Λ*, pk*_U) ≔ m_fix^out* and checks whether pk_U ≠ pk*_U. If the check succeeds, it gives up the simulation with event *E6*.

**Hybrid op‐sec** 14 **(Check balance)** The simulator S op‐sec <sup>14</sup> additionally checks for prev ≠ ∗ 1 and gives up the simulation with event *E7* , if the check succeeds.

**Hybrid op‐sec** 15 **(Check transaction counter)** The simulator S op‐sec <sup>15</sup> additionally checks for ≠ <sup>∗</sup>+1 1 and gives up the simulation with event *E8*, if the check succeeds.

**Hybrid op‐sec** 16 **(Check DS mask)** The simulator S op‐sec <sup>16</sup> additionally checks for <sup>1</sup> ≠ <sup>∗</sup> 1 and gives up the simulation with event *E9*, if the check succeeds.
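Schematically, hybrids H_9^op-sec to H_16^op-sec all follow the same "look up, compare, or give up" pattern. The following Python sketch only illustrates this pattern for three of the checks; the dictionary-based *TRDB* stand-in, the field names and the exception are illustrative assumptions and not part of the actual simulator.

```python
# Illustrative sketch of the simulator's "sanity check or give up" pattern.
class SimulationFailure(Exception):
    """Raised when a check fails and the simulator gives up (events E2, E3, E5, ...)."""

def check_predecessor(trdb, s_prev, extracted):
    entry = trdb.get(s_prev)                      # serial numbers act as unique keys
    if entry is None:
        raise SimulationFailure("E2")             # no predecessor entry recorded
    if entry["c_upd_out"] != extracted["c_upd_prev"]:
        raise SimulationFailure("E3")             # updatable wallet part differs
    if entry["c_fix_out"] != extracted["c_fix"]:
        raise SimulationFailure("E5")             # fixed wallet part differs
    return entry

# Usage with toy entries: a matching predecessor passes all three checks.
trdb = {"s0": {"c_upd_out": "C_upd", "c_fix_out": "C_fix"}}
extracted = {"c_upd_prev": "C_upd", "c_fix": "C_fix"}
assert check_predecessor(trdb, "s0", extracted)
```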

**Hybrid op‐sec** 17 **(Utilize lookup tables for DetectDS)** This hybrid modifies the code π_17^op-sec for O in the task DetectDS. In the task DetectDS the honest O becomes a dummy party, too, which simply forwards its inputs ω_ds and ω′_ds. The code is moved to the simulator, and S_17^op-sec uses its "lookup table" *TRDB* the same way as F_apc and the final simulator S_P5C^op-sec do. More precisely, for legitimately issued, distinct and matching double-spending tags, S_17^op-sec looks up (pk_U, sk_U) ≔ ̄ keys(pid_U), returns π ≔ sk_U and records (pid_U, π) ≔ OK.

**Hybrid op‐sec** 18 **(Utilize lookup tables for VerifyGuilt)** This hybrid modifies the code π_18^op-sec for parties in the task VerifyGuilt. The honest party does not locally run the algorithm itself, but simply forwards its input to the simulator (as the dummy party would do), and S_18^op-sec queries (pid_U, π) as F_apc would do or proceeds as S_P5C^op-sec if (pid_U, π) has not yet been defined.

**Hybrid op‐sec** 19 **(Utilize lookup tables for BlacklistWallet, forego decryption of blacklisting tags)** The dispute resolver *DR* becomes a dummy party and simply sends its input (blacklist\_wallet, pid′_U) to the simulator S_19^op-sec in order to signal its consent to blacklist the user. The simulator S_19^op-sec utilizes the Simulated Transaction Graph *TRDB* as well as the recorded blacklisting tags and runs the code as the ideal functionality F_apc would eventually do. In particular, S_19^op-sec does not decrypt ψ_bl, but uses the mapping Λ ↦ ω_bl recorded in hybrid H_7^op-sec to determine the original Λ.⁶

**Hybrid op‐sec** 20 **(Utilize lookup tables for RecalculateBalance, forego decryption of recalculation tags)** This hybrid utilizes *TRDB* to link legitimately issued recalculation tags to their origin.

When the task RecalculateBalance is invoked, S_20^op-sec partitions the set of recalculation tags Ω_rc into two sets Ω_rc^genuine and Ω_rc^fake the same way as F_apc would do (cp. Figs. 4.16 and 8.16). Recalculation tags ω_rc ∈ Ω_rc^genuine are not decrypted; instead S_20^op-sec queries *TRDB* for the corresponding pairs of a serial number s and a price p. Recalculation tags ω_rc ∈ Ω_rc^fake are still decrypted, their signature is checked for validity, and the pairs (s, p) are compiled from the decrypted values. Then the bill is calculated as the sum of the prices p over both sets of pairs.

This behavior equals the joint behavior of Fapc and the final simulator S op‐sec P5C (cp. Figs. 4.16 and 8.16).
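A minimal Python sketch of this bookkeeping, under the simplifying assumption that every recalculation tag is recorded as (or decrypts to) a serial-number/price pair; `trdb_lookup` and `decrypt_and_verify` are illustrative stand-ins for the simulator's lookup table and the real decryption plus signature check:

```python
# Minimal sketch (illustrative assumptions): recalculate the bill without
# decrypting genuine tags, mirroring the behavior of this hybrid.
def recalculate_bill(tags, recorded, trdb_lookup, decrypt_and_verify):
    genuine = [t for t in tags if t in recorded]      # legitimately issued tags
    fake = [t for t in tags if t not in recorded]     # made up by the environment
    pairs = [trdb_lookup(t) for t in genuine]         # look up (serial, price), no decryption
    pairs += [p for t in fake if (p := decrypt_and_verify(t)) is not None]
    return sum(price for _serial, price in pairs)

# Usage with toy stand-ins for the lookup table and the decryption routine.
recorded = {"tag1": ("s1", 3), "tag2": ("s2", 5)}
bill = recalculate_bill(
    tags=["tag1", "tag2", "tag3"],
    recorded=recorded,
    trdb_lookup=recorded.get,
    decrypt_and_verify=lambda t: ("s3", 2) if t == "tag3" else None,
)
assert bill == 10
```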

**Hybrid op‐sec** 21 **(Utilize lookup tables for ProveParticipation, check signature)** This hybrid utilizes ̄ pp to assert that prove-participation tags are honestly signed. This sanity check is a preparatory step for the eventual switch from the real code to the ideal code in hybrid op‐sec <sup>23</sup> by ruling out that a corrupted user forges signatures.

More precisely the following changes are applied by op‐sec <sup>21</sup> in the scope of ProveParticipation for an honest violation enforcer interacting with a corrupted user:

The party *VE* becomes a dummy party and simply forwards the input pid_P and the set of prove-participation tags to S_21^op-sec. The simulator interacting with Z^op-sec still runs the real code (as a real *VE* would do), but utilizes its map ̄ pp to add the following check. When Z^op-sec (playing the corrupted user) sends its prove-participation tag and pid_P is honest, the simulator tries to look up its

⁶ The operator is honest and the real code would abort for a bl that has not legitimately been issued. Hence, −1 bl (bl) is always defined.

original complement (σ*_pp, pk*_U) ≔ ̄ pp(ω_pp). If this does not exist, i.e., if σ*_pp = ⊥, but Z^op-sec has provided a valid signature, then the simulator gives up the simulation with event *E10*.

**Hybrid op‐sec** 22 **(Utilize lookup tables for ProveParticipation, check user ID)** Similar to op‐sec <sup>21</sup> this hybrid introduces another sanity check in the scope of ProveParticipation in case of a corrupted user and an honest violation enforcer:

If an original complement (σ*_pp, pk*_U) ≔ ̄ pp(ω_pp) for ω_pp exists but the environment Z^op-sec unveils the commitment c_{pk_U} = ω_pp to a different pk_U than the one for which it has originally been issued, the simulator gives up with event *E11*.

Otherwise it still runs the real code for ProveParticipation.

**Hybrid op‐sec** 23 **(Utilize lookup tables for ProveParticipation, forego unveil of proveparticipation tags)** This hybrid utilizes *TRDB* and ̄ pp to link legitimately issued proveparticipation tags to their origin. More precisely, the following changes are applied by op‐sec 23 in the scope of ProveParticipation:


**Hybrid op‐sec** 24 **(Fake blacklisting tags for honest users)** The code π_24^op-sec for honest users in the scope of IssueWallet is modified such that they do not send ψ_bl. Instead, S_24^op-sec returns ω_bl ≔ (λ″, ψ_bl) with ψ_bl ← ENC1.Enc(pk_DR, (1, … , 1)) when O asks for ω_bl (cp. hybrid H_7^op-sec).

**Hybrid op‐sec** 25 **(Fake double-spending tags for honest users)** The code π_25^op-sec for honest users in the scope of Deposit and Disburse is modified such that they do not send a real DS response t. When the operator asks for the double-spending tag (cp. hybrid H_7^op-sec), the simulator S_25^op-sec proceeds as follows. S_25^op-sec compiles the set Ω_ds ≔ { ω_ds | (… , s^prev, … , ω_ds, … ) ∈ *TRDB* }. (N.b., this already happens for *corrupted* users in hybrid H_8^op-sec to recover their secret key.) If no (φ, t′, u′_2) ∈ Ω_ds has been recorded previously, S_25^op-sec picks t ←R ℤ_p randomly. Otherwise S_25^op-sec sets t ≔ t′ + sk_U(u_2 − u′_2). This equals the behavior of the final simulator S_P5C^op-sec.
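For concreteness, a small Python sketch of how a consistent double-spending response can be produced; the toy prime and the dictionary keyed by the fraud-detection ID are illustrative stand-ins for the simulator's lookup in *TRDB*.

```python
import secrets

p = 2**61 - 1  # toy prime standing in for the group order

# recorded_tags: fraud-detection ID -> (t', u2') of a previously simulated response.
def simulate_ds_response(phi, u2, sk_U, recorded_tags):
    if phi not in recorded_tags:
        # First response for this token: a fresh uniform value is distributed
        # exactly like t = u2 * sk_U + u1 for a uniformly random mask u1.
        t = secrets.randbelow(p)
    else:
        # Second response must stay consistent: t - t' = sk_U * (u2 - u2').
        t_prev, u2_prev = recorded_tags[phi]
        t = (t_prev + sk_U * (u2 - u2_prev)) % p
    recorded_tags[phi] = (t, u2)
    return t

# Usage: two challenges for the same token expose sk_U, as DetectDS expects.
tags, sk_U = {}, 99
t1 = simulate_ds_response("phi", 5, sk_U, tags)
t2 = simulate_ds_response("phi", 9, sk_U, tags)
assert (t2 - t1) * pow((9 - 5) % p, -1, p) % p == sk_U % p
```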

**Hybrid op‐sec** 26 **(Fake recalculation tags for honest users)** The code op‐sec <sup>26</sup> for honest operator/PoS in the scope of Deposit and Disburse abandons the over-leakage of ′ rc that has provisionally been introduced by hybrid op‐sec 7 . When they ask for rc the simulator does not simply reflect rc ≔ ′ rc, but instead creates rc on its own. The simulator does so in two different ways, depending on the corruption status of the operator.

If the operator is corrupted,⁷ the simulator creates rc ≔ (, , , *pk*rc P , rc) with rc ← SIG.Sign(*sk*rc <sup>P</sup> , (, , 1 ))faithfully and provides a true encryption rc <sup>←</sup> ENC2.Enc(*pk*rc,enc O , rc). We stress that S op‐sec <sup>26</sup> knows all relevant information , , *pid*<sup>P</sup> and due to the leakage introduced by hybrid op‐sec 7 .

If the operator is honest, S op‐sec <sup>26</sup> provides an encryption rc <sup>←</sup> ENC2.Enc(*pk*rc,enc O , rc) for an arbitrary rc from the correct space.

**Hybrid op‐sec** 27 **(Fake prove-participation tags for honest users)** The hybrid op‐sec <sup>27</sup> modifies Deposit and ProveParticipation.

In Deposit the honest users do not leak (pp, pp) anymore. This leakage has provisionally been introduced by hybrid op‐sec 7 . Instead, S op‐sec <sup>27</sup> simulates the commitment as(*pk*<sup>U</sup> , ̄ *pk*<sup>U</sup> ) ← C2.CommitSim(*crs* (2) com) and runs pp <sup>←</sup> SIG.Sign(*sk*pp P , *pk*<sup>U</sup> ). S op‐sec <sup>27</sup> sets pp ≔ *pk*<sup>U</sup> and pp ≔ (*pk*pp P , pp, ̄ *pk*<sup>U</sup> ), returns pp and defines ̄ pp(pp) ≔ (pp, <sup>1</sup> ).

Moreover, the code for ProveParticipation in case of an honest user and a corrupted violation enforcer is adapted (cp. hybrid H_23^op-sec). After S_27^op-sec has looked up the corresponding (ψ_pp, ⋅) ≔ ̄ pp(ω_pp), but before sending ψ_pp to Z^op-sec playing the corrupted *VE*, S_27^op-sec parses (pk^pp_P, σ_pp, d̄_{pk_U}) ≔ ψ_pp, equivocates the decommitment d_{pk_U} ← C2.Equivoke(crs^(2)_com, pk_U, c_{pk_U}, d̄_{pk_U}), redefines ψ_pp ≔ (pk^pp_P, σ_pp, d_{pk_U}) and then sends ψ_pp.

Again, this equals the behavior of the final simulator S_P5C^op-sec.

For the proof of Theorem 8.2 we show the indistinguishability of consecutive hybrids by a series of lemmas. The hops to hybrids op‐sec 2 to op‐sec 4, op‐sec 7 and op‐sec 8 are rather trivial and thus Lemma 8.5 handles several hybrids at once.

⁷ N.b., for operator security the operator is always honest, i.e. this case never holds. However, we explicitly consider this case here, as this allows us to reuse this hybrid as hybrid <sup>17</sup> to prove user security.

**Lemma 8.4 (Indistinguishability between op‐sec** 0 **and op‐sec** 1 **)** *Under the assumptions of Theorem 8.2,* op‐sec 0 c ≡ op‐sec 1 *holds.*

Proof This hop solely changes how the *crs* is created during the setup phase. This is indistinguishable for *crs*pok, *crs* (2) com, and *crs* (4) com (see the extractability property of Definition 6.9 and the equivocality property of Definition 6.11, resp., condition (a) each).

**Lemma 8.5 (Indistinguishability between their respective predecessors and op‐sec** 2 **, op‐sec** 3 **, op‐sec** 4 **, op‐sec** 7 **, op‐sec** 8 **, resp.)** *Under the assumptions of Theorem 8.2,* op‐sec 1 c ≡ op‐sec 2 c ≡ op‐sec 3 c ≡ op‐sec 4 *, and* op‐sec 6 c ≡ op‐sec 7 c ≡ op‐sec 8 *hold.*

Proof The hops are all indistinguishable as they do not change anything in the view of Zop‐sec. Please note, that Zop‐sec only sees the in-/output of honest parties and these hops only syntactically change what parts of the code are executed by the parties or by the simulator. With each hop the parties degrade more to a dummy party while at the same time more functionality is put into the simulator.

**Lemma 8.6 (Indistinguishability between op‐sec** 4 **and op‐sec** 5 **)** *Under the assumptions of Theorem 8.2,* op‐sec 4 c ≡ op‐sec 5 *holds.*

Proof This hop is indistinguishable as the equivocated decommitment information is perfectly indistinguishable from a decommitment that has originally been created with the correct message (cp. Definition 6.11, Item (3)).

So far, none of the hops between two consecutive hybrids changes anything from the environment's perspective: either the hops are only syntactical or the modification is perfectly indistinguishable. Hence, no reduction argument is required. In contrast, each of the upcoming security proofs roughly follows the same line of argument. If the environment Z^op-sec can efficiently distinguish between two consecutive hybrids, then we can construct an efficient adversary B against one of the underlying cryptographic building blocks. To this end, B plays the adversary against a particular security property in the outer game and internally executes the UC-experiment in its head while mimicking the role of the simulator. It is important to note that although B emulates the environment internally, it only has *black-box access* to it. In other words, although everything happens inside "the head of B", B cannot somehow magically extract Z^op-sec's attack strategy.

**Lemma 8.7 (Indistinguishability between op‐sec** 5 **and op‐sec** 6 **)** *Under the assumptions of Theorem 8.2,* op‐sec 5 c ≡ op‐sec 6 *holds.*

Proof First note that the only effective change between H_5^op-sec and H_6^op-sec are the additional checks that abort the simulation with event *E1* if the extracted witnesses are invalid. Again, the other modifications are purely syntactical. To prove indistinguishability between H_5^op-sec and H_6^op-sec we split this hop into three sub-hybrids. Each sub-hybrid introduces the check for one of the languages L^(1)_gp, L^(2)_gp and L^(3)_gp, resp. In the following only the sub-hybrid for the language L^(1)_gp is considered; the indistinguishability of the remaining two is proved analogously. Further note that the view of Z^op-sec is perfectly indistinguishable if the simulation does not abort.

Assume there is an environment Z^op-sec that triggers the event *E1* in the first sub-hybrid with non-negligible advantage. This immediately yields an efficient adversary B against the extraction property of the NIZK scheme. Internally, B runs Z^op-sec in its head and plays the role of the simulator and all honest parties. Externally, B plays the adversary in Definition 6.9, Item (3b). If the event *E1* occurs internally, B outputs the corresponding pair (*stmnt*, π). In the second and third sub-hybrid B internally extracts the witnesses for the previous sub-hybrids using the extraction trapdoor *td*epok, which B obtains as part of its input.

**Remark 8.8** *We observe that Lemma 8.7 implies that the equations*

$$\text{C1.Open}(\text{crs}\_{\text{com}}^{\text{(1)}}, m, \text{c}\_{\text{fix}}, d\_{\text{fix}}) = 1 \qquad \text{with} \qquad m = (\Lambda, pk\_{\text{ll}}) \tag{8.9}$$

$$\text{C1.Open}(\text{crs}\_{\text{com}}^{\text{(1)}}, m, \text{c}\_{\text{upd}}, d\_{\text{upd}}) = 1 \qquad \text{with} \qquad m = \text{(A, 1, U}\_1^{\text{next}}, \text{g}\_1) \tag{8.10}$$

$$\text{C1.Open}(crs_{\text{com}}^{(1)}, m, c_{\text{upd}}^{\text{prev}}, d_{\text{upd}}^{\text{prev}}) = 1 \qquad \text{with} \qquad m = (\Lambda, B^{\text{prev}}, U_1, X) \tag{8.11}$$

$$\text{C1.Open}(\text{crs}\_{\text{com}}^{\{1\}}, m, c\_{\text{upd}}', d\_{\text{upd}}') = 1 \qquad \text{with} \qquad m = (\Lambda, B^{\text{prev}}, U\_1^{\text{next}}, X) \tag{8.12}$$

$$\text{C3.Open}(crs\_{\text{com}}^{\{3\}}, \Lambda', c\_{\text{wid}}', d\_{\text{wid}}') = 1 \tag{8.13}$$

$$\text{C2.Open}(\text{crs}\_{\text{com}}^{\langle 2 \rangle}, pk\_{\text{ll}}, c\_{pk\_{\text{ll}}}, d\_{pk\_{\text{ll}}}) = 1 \tag{8.14}$$

*and*

$$\text{SIG.Vfy}(pk_{\mathcal{O}}^{\text{fix}}, \sigma_{\text{fix}}, m) = 1 \qquad \text{with} \qquad m = (c_{\text{fix}}, a_{\mathcal{U}}) \tag{8.15}$$

$$\text{SIG.Vfy}(pk_{\mathcal{P}}^{\text{upd,prev}}, \sigma_{\text{upd}}^{\text{prev}}, m) = 1 \qquad \text{with} \qquad m = (c_{\text{upd}}^{\text{prev}}, s^{\text{prev}}) \tag{8.16}$$

$$\text{SIG.Vfy}(pk_{\mathcal{O}}^{\text{cert}}, \sigma_{\mathcal{P}}^{\text{cert,prev}}, m) = 1 \qquad \text{with} \qquad m = (pk_{\mathcal{P}}^{\text{prev}}, a_{\mathcal{P}}^{\text{prev}}) \tag{8.17}$$

(8.18)

*resp., hold and that all variables can efficiently be extracted. Remember, that gp acts as the identity function on group elements. Likewise, the equation*

$$T = pk_{\mathcal{U}}^{u_2} \cdot U_1 \qquad\qquad T = g_1^{t} \tag{8.19}$$

*holds. Note that the ℤ_p-elements t and u_2 cannot be extracted, but are known and part of the statement. Moreover, given the extracted chunks of the wallet ID λ′_0, …, λ′_{ℓ−1} the unique wallet ID Λ can be reconstructed. The projection gp becomes injective if the pre-image is restricted to ℤ_p, and the inverse, i.e. DLOG, can be efficiently computed as λ′_0, …, λ′_{ℓ−1} are sufficiently "small".*

Up to this point, we already know that op‐sec 0 c ≡ op‐sec 8 holds. Except for two small changes (from op‐sec 4 to op‐sec 5 and from op‐sec 5 to op‐sec 6 ) all hops are only syntactical.

The remaining subsequent hybrids can roughly be divided into two groups. The hybrids from op‐sec 9 to op‐sec <sup>16</sup> cover modifications that affect corrupted users while op‐sec <sup>17</sup> to op‐sec 27 cover modifications that affect honest users.

The hybrids we deal with first, i.e., hybrids H_9^op-sec to H_16^op-sec, only add more sanity checks but do not change any messages. However, only *TRDB* and these sanity checks enable a reduction to cryptographic assumptions and thus are vital to prove operator security. Intuitively, these sanity checks assert that a malicious user cannot make the simulated transaction database and the ideal transaction database fall apart without immediately being noticed, unless the malicious user has successfully broken a cryptographic assumption. To this end, two additional lemmas about the structure of *TRDB* are necessary. These lemmas are in the same spirit as Lemmas 5.2 and 5.3. Intuitively, the commitments c_fix, c_upd induce a graph structure onto *TRDB* comparable to the wallet ID Λ and serial number s.

#### **Lemma 8.9 (Forest Structure of the Simulated Transaction Graph)**

	- (2) As the serial number of the new node is randomly chosen, no existing node can point to the new node as its predecessor and thus no cycle is closed with overwhelming probability.

**Lemma 8.10 (Indistinguishability between op‐sec** 8 **and op‐sec** 9 **)** *Under the assumptions of Theorem 8.2,* op‐sec 8 c ≡ op‐sec 9 *holds.*

Proof Assume there is an environment Z^op-sec that triggers the event *E2* with non-negligible advantage. This immediately yields an efficient adversary B against the EUF-CMA security of SIG. We only need to deal with the case that the predecessor entry does not exist. If it exists, Lemma 8.9, Item (1) implies its uniqueness. We need to distinguish two cases. On an abstract level these cases correspond to the following scenarios: Either the previous PoS exists; then the signature σ_upd^prev on (c_upd^prev, s^prev) is a forgery. Or, alternatively, the allegedly previous PoS does not exist but has been imagined by the user; then (c_upd^prev, s^prev) may have an honest, valid signature (because the user feigned the PoS), but the certificate *cert*_P^prev for the fake PoS constitutes a forgery. Please note that the simulator always records an entry *trdb* when it legitimately issues a signature σ_upd, and vice versa.


⁸ N.b.: PoS may also denote the operator, if the transaction at hand happens to be the first after an IssueWallet and thus the predecessor entry has been signed by the operator playing the role of a PoS. For brevity, we only consider PoSes here.

with respect to *pk*cert O as otherwise a mapping *pid*prev <sup>P</sup> ↦ (*pk*prev P ,*sk*prev P ,*cert*P) would have been recorded.

The forgeries are indeed valid due to Remark 8.8.

**Remark 8.11** *Without Lemma 8.10 it is unclear in Lemma 8.9, Item (2) if the denoted predecessor of an edge (s^prev, s) actually exists. The simulator extracts the serial number s^prev of the predecessor from the proof and puts this serial number into the newly added trdb. With this in mind, Lemma 8.9, Item (2) would have to be interpreted such that an edge (s^prev, s) is ignored if the predecessor does not exist. Nonetheless, TRDB is still a forest and Lemma 8.9, Item (2) remains correct. Anyway, this oddity is ruled out by Lemma 8.10.*

**Lemma 8.12 (Indistinguishability between op‐sec** 9 **and op‐sec** 10 **)** *Under the assumptions of Theorem 8.2,* op‐sec 9 c ≡ op‐sec <sup>10</sup> *holds.*

Proof Assume there is an environment Z^op-sec that triggers the event *E3* with non-negligible advantage. This immediately yields an efficient adversary B against the EUF-CMA security of SIG by the same argument as in the proof of Lemma 8.10, as (c_upd^prev, s^prev) are jointly signed by the same signature σ_upd.

**Lemma 8.13 (Indistinguishability between op‐sec** 10 **and op‐sec** 11 **)** *Under the assumptions of Theorem 8.2,* op‐sec 10 c ≡ op‐sec <sup>11</sup> *holds.*

Proof Assume there is an environment Z^op-sec that triggers the event *E4* with non-negligible advantage. We construct an efficient adversary B against the binding property of C1. Internally, B runs Z^op-sec in its head and plays the role of the simulator and all honest parties. Externally, B plays the role of the adversary as defined by Definition 6.11, Item (2). When the event *E4* occurs, B sets

$$m\_{\rm upd}^{\rm prev} := (\Lambda, B^{\rm prev}, U\_1, X) \tag{8.20}$$

from the extracted witness and obtains

$$m\_{\rm upd}^{\rm out} = (\Lambda^\*, B^\*, U\_1^\*, X^\*) \tag{8.21}$$

from *TRDB*. B outputs (c_upd^out*, d_upd^prev, m_upd^prev, d_upd^out*, m_upd^out*) to the external game. By assumption Λ ≠ Λ* holds and Remark 8.8 asserts that both openings are valid.

**Lemma 8.14 (Tree-wise Uniqueness of the Wallet Identifier)** *The wallet ID maps one-to-one and onto a connected component (i.e., tree) of the Simulated Transaction Graph.*

Proof Same argument as in the proof of Lemma 5.3.

**Lemma 8.15 (Indistinguishability between op‐sec** 11 **and op‐sec** 12 **)** *Under the assumptions of Theorem 8.2,* op‐sec 11 c ≡ op‐sec <sup>12</sup> *holds.*

Proof We introduce a sub-hybrid that splits between two cases why event *E5* is triggered: (1) c_fix^out* ≠ c_fix and c_fix is not recorded in any *trdb* ∈ *TRDB*. (2) c_fix^out* ≠ c_fix and c_fix is recorded in some record *trdb*‡ ∈ *TRDB*. An environment Z^op-sec that can differentiate between H_11^op-sec and the sub-hybrid yields an efficient adversary B against the EUF-CMA security of SIG. An environment Z^op-sec that can differentiate between the sub-hybrid and H_12^op-sec yields an efficient adversary B against the binding property of C1.


$$m\_{\rm fix} \coloneqq (\Lambda, pk\_{\rm \mathcal{U}}) \tag{8.22}$$

from the extracted witness and obtains

$$m_{\text{fix}}^{\text{out}\ddagger} := (\Lambda^{\ddagger}, pk_{\mathcal{U}}^{\ddagger}) \tag{8.23}$$

from *TRDB*. B outputs (c_fix, d_fix, m_fix, d_fix^out‡, m_fix^out‡) to the external game.

Remark 8.8 asserts that the forgery in (1) and both openings in (2) are indeed valid.

**Lemma 8.16 (Indistinguishability between op‐sec** 12 **, op‐sec** 13 **, op‐sec** 14 **, op‐sec** 15 **and op‐sec** 16 **)** *Under the assumptions of Theorem 8.2,* op‐sec 12 c ≡ op‐sec 13 c ≡ op‐sec 14 c ≡ op‐sec 15 c ≡ op‐sec <sup>16</sup> *holds.*

Proof If an environment Z^op-sec can distinguish between any of the hops from H_12^op-sec to H_16^op-sec, this yields an efficient adversary against the binding property of C1. As usual, B runs Z^op-sec in its head and internally plays the role of the simulator and all honest parties. Externally, B plays the role of the adversary as defined by Definition 6.11, Item (2). If event (*E7*) or (*E8*) occurs, B sets

$$m\_{\rm upd}^{\rm prev} = (\Lambda, B^{\rm prev}, U\_1, \mathfrak{g}\_1 X) \tag{8.24}$$

from the extracted witness and obtains

$$m\_{\text{upd}}^{\text{out}^\*} := (\Lambda^\*, B^\*, U\_1^\*, X^\*) \tag{8.25}$$

from *TRDB*. B outputs (c_upd, d_upd^prev, m_upd^prev, d_upd^out*, m_upd^out*) to the external game. If the event (*E6*) is triggered, B proceeds analogously, but for the fixed part of the wallet c_fix.

Again, we interrupt the line of argument to summarize what we have so far. We know that op‐sec 0 c ≡ op‐sec <sup>16</sup> holds. From a high-level perspective, most of the previous hybrids ensured that corrupted users cannot fool the operator (or PoSes) within tasks that expand the transaction database, i.e. essentially within the main tasks IssueWallet, Deposit and Disburse.

The remaining hybrids from H_17^op-sec to H_27^op-sec largely consider modifications to the utility tasks, with honest users being of special concern. The final simulator S_P5C^op-sec needs to provide various tags (ω_ds, ω_bl, ω_rc and ω_pp) to F_apc that are output by the main tasks and later re-used in the utility tasks. Until now, i.e. up to simulator S_16^op-sec, real messages sent by users have been used to compile and record real tags (cp. hybrid H_7^op-sec). These have been played back when necessary. While little needs to be changed for corrupted users, the final simulator S_P5C^op-sec must provide these tags for honest users without having access to any messages. Hybrids H_17^op-sec to H_27^op-sec introduce the necessary modifications.

**Lemma 8.17 (Indistinguishability between op‐sec** 16 **and op‐sec** 17 **)** *Under the assumptions of Theorem 8.2,* op‐sec 16 c ≡ op‐sec <sup>17</sup> *holds.*

Proof We need to distinguish the same cases as the final simulator S_P5C^op-sec does.

If Z^op-sec calls DetectDS with two double-spending tags ω_ds = (φ, t, u_2), ω′_ds = (φ′, t′, u′_2) that do not stem from the system, do not match or are otherwise unusable, the hop is perfectly indistinguishable, because S_17^op-sec simply runs the same algorithm as the honest operator in the real game. At the bottom line, both calculate sk_U ≔ (t − t′)/(u_2 − u′_2) mod p and return the result. We stress that in both experiments—the real protocol and the ideal functionality—there is no guarantee that the returned sk_U is even a valid secret key. This follows the garbage-in-garbage-out principle.

We now consider genuine double-spending tags that have been output by the system before and match each other, i.e., they are distinct and have a common fraud-detection ID φ. In this case S_17^op-sec does not recover sk_U from the double-spending tags by calculation, but looks up the secret key sk*_U that has been recorded in ̄ keys for pid_U and returns π ≔ sk*_U as a proof of guilt. The only way Z^op-sec could possibly distinguish between H_16^op-sec and H_17^op-sec is that sk_U ≠ sk*_U holds, which entails pk_U ≠ g_1^{sk_U} or pk_U ≠ g_1^{sk*_U}. Intuitively, this attack means the environment has been able to let a user commit double-spending such that the generated double-spending tags do not allow to calculate a valid proof of guilt.

If the user is honest, the user's key has been generated by S_17^op-sec and recorded in ̄ keys ab initio. In particular, pk_U = g_1^{sk*_U} holds and the honest user always correctly answers the double-spending challenge. Simple math shows that sk_U ≔ (t − t′)/(u_2 − u′_2) = ((u_2 · sk*_U + u_1) − (u′_2 · sk*_U + u_1))/(u_2 − u′_2) = sk*_U follows.

If the user is corrupted, the user's secret key is generated by the environment. In this case, sk*_U is recovered by S_17^op-sec in the scope of Deposit or Disburse and recorded in ̄ keys due to the change in hybrid H_8^op-sec. S_8^op-sec uses the same equation during Deposit/Disburse to recover sk*_U as the honest operator uses in DetectDS to recover sk_U in the real game. In other words, the recovery of the secret key is merely brought forward from a belated double-spending detection in the real experiment to the point in time when the double-spending actually occurs in the simulated experiment. As the same equation is used, sk*_U = sk_U follows trivially, if the recovery has succeeded.

It remains to show that sk*_U is always successfully recovered, i.e. that the test pk_U = g_1^{sk*_U} (cp. Step 5 in Fig. 8.9 and Step 4 in Fig. 8.13) succeeds. In short, this holds due to the soundness of the NIZK and the binding property of C1. Otherwise hybrid H_6^op-sec or hybrid H_16^op-sec would already have aborted with event *E1* or *E9*, resp. More precisely, using Remark 8.8 we conclude that the two equations T = pk_U^{u_2} ⋅ U_1 and T′ = pk_U^{u′_2} ⋅ U_1 with extracted pk_U, T, T′ and U_1 hold. Moreover, T = g_1^t and T′ = g_1^{t′} hold, and t, t′ as well as u_2, u′_2 are known as part of the statement. By equating we obtain T ⋅ pk_U^{−u_2} = T′ ⋅ pk_U^{−u′_2}, which yields pk_U = (T ⋅ T′^{−1})^{1/(u_2−u′_2)} = g_1^{(t−t′)/(u_2−u′_2)} = g_1^{sk*_U}.
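To make the arithmetic of the double-spending detection concrete, the following minimal Python sketch recomputes the secret key from two matching tags; the toy prime and the tuple layout (fraud-detection ID, response, challenge) are illustrative assumptions, not the scheme's actual bilinear-group setting.

```python
# Minimal sketch (illustrative assumptions): recover sk_U from two matching
# double-spending tags (phi, t, u2) with t = u2 * sk_U + u1 over Z_p.
p = 2**61 - 1  # toy prime standing in for the group order

def recover_sk(tag, tag_prime):
    (phi, t, u2), (phi_p, t_p, u2_p) = tag, tag_prime
    if phi != phi_p or u2 == u2_p:
        return None  # tags do not match or are unusable
    # sk_U = (t - t') / (u2 - u2') mod p
    return (t - t_p) * pow((u2 - u2_p) % p, -1, p) % p

# Usage: an honest user answers two distinct challenges u2, u2' with the
# same wallet state (same mask u1), which exposes sk_U.
sk_U, u1 = 1234567, 42
tag1 = ("phi", (5 * sk_U + u1) % p, 5)
tag2 = ("phi", (9 * sk_U + u1) % p, 9)
assert recover_sk(tag1, tag2) == sk_U
```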

**Lemma 8.18 (Indistinguishability between op‐sec** 17 **and op‐sec** 18 **)** *Under the assumptions of Theorem 8.2,* op‐sec 17 c ≡ op‐sec <sup>18</sup> *holds.*

Proof VerifyGuilt is a local algorithm and does not send any messages. Hence, Zop‐sec can distinguish between op‐sec <sup>17</sup> and op‐sec <sup>18</sup> , if and only if VerifyGuilt returns a different result bit for the same input.

First note that S_18^op-sec still falls back to the real algorithm if Z^op-sec calls VerifyGuilt with an input (pid_U, π) for a corrupted user and a π which is made up by Z^op-sec, i.e., if the corresponding entry in the internal map of simulator S_18^op-sec is undefined (cp. Step 2 in Fig. 4.14). In this case H_18^op-sec is perfectly indistinguishable from H_17^op-sec. In other words, Z^op-sec has to call VerifyGuilt for an honest user pid_U or for a genuine proof of guilt π in order to trigger a distinguishing result bit, if at all.

Also note that the real code and the ideal functionality are both deterministic. W.l.o.g. it therefore suffices to consider first-time invocations of VerifyGuilt for a particular input (pid_U, π). Under this restriction (pid_U, π) is only defined if it has been set in the scope of Deposit or Disburse (cp. Step 5 in Fig. 8.9 and Step 4 in Fig. 8.13) or in the scope of DetectDS (cp. Fig. 4.13). The necessary modifications have been introduced by hybrids H_8^op-sec and H_17^op-sec, resp. However, we can ignore the case that (pid_U, π) is only defined because VerifyGuilt has been invoked a second time (cp. Step 4 in Fig. 4.14).

We discuss both cases for a different outcome separately.


In summary, VerifyGuilt(pid_U, π) returns OK in H_17^op-sec but NOK in H_18^op-sec if and only if there is an environment Z^op-sec that comes up with a correct proof of guilt π = sk_U for an *honest* user without letting this user commit double-spending. This immediately yields an efficient adversary B against the DLOG assumption.

Externally, B gets a group element h ∈ G_1 as its input. Internally, B runs Z^op-sec in its head and plays the role of the simulator and all honest parties. B guesses the honest user for which Z^op-sec eventually calls the distinguishing VerifyGuilt. When B has to internally provide pk_U in the scope of RegisterUser, it uses pk_U = h. Note that B does not need to know sk_U for a successful simulation. As the user is honest, all PoSes and the operator are honest, too.⁹ Hence, for this particular user no messages need to be simulated. When Z^op-sec calls VerifyGuilt with a correct π ≔ sk_U, B outputs sk_U as the DLOG.
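The skeleton of this reduction can be pictured as follows; the toy group, the callable standing in for the black-box environment, and the function names are illustrative assumptions, not part of the formal proof.

```python
# Schematic sketch of the reduction in Lemma 8.18 (toy group): B embeds the
# DLOG challenge h = g^x as the guessed honest user's public key and, if the
# environment produces a valid proof of guilt pi with g^pi = h, outputs pi.
q, g = 2**31 - 1, 5

def reduction_B(h, run_environment):
    pi = run_environment(pk_U=h)       # black-box run of Z with the embedded key
    if pi is not None and pow(g, pi, q) == h:
        return pi                       # pi is the discrete logarithm of h
    return None

# Usage with a toy "environment" that happens to output the correct key.
x = 123456
assert reduction_B(pow(g, x, q), lambda pk_U: x) == x
```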

**Lemma 8.19 (Indistinguishability between op‐sec** 18 **and op‐sec** 19 **)** *Under the assumptions of Theorem 8.2,* op‐sec 18 c ≡ op‐sec <sup>19</sup> *holds.*

Proof This hop is perfectly indistinguishable from the environment's perspective as the modifications made by hybrid H_19^op-sec do not change the output. Note that operator and dispute resolver are both honest. Due to the correctness of ENC1 the ciphertext ψ_bl determines a unique message (for a fixed key pair pk_DR, sk_DR). Hence, the originally recorded wallet ID Λ, as determined by the mapping Λ ↦ ω_bl recorded in hybrid H_7^op-sec, equals the one that ψ_bl decrypts to.

**Lemma 8.20 (Indistinguishability between op‐sec** 19 **and op‐sec** 20 **)** *Under the assumptions of Theorem 8.2,* op‐sec 19 c ≡ op‐sec <sup>20</sup> *holds.*

Proof The task RecalculateBalance is an algorithm that is locally executed by the operator, and the operator is honest. The hop is perfectly indistinguishable from the environment's perspective as the modifications made by hybrid H_20^op-sec do not change the output, by the same argument as for the previous hop. The set of recalculation tags Ω_rc = Ω_rc^genuine ⊎ Ω_rc^fake is partitioned into two disjoint subsets. Due to the correctness of ENC2, looking up the originally recorded cleartext for a genuine ω_rc ∈ Ω_rc^genuine yields the same result as actual decryption. The treatment of Ω_rc^fake is not changed at all.

**Lemma 8.21 (Indistinguishability between op‐sec** 20 **and op‐sec** 21 **)** *Under the assumptions of Theorem 8.2,* op‐sec 20 c ≡ op‐sec <sup>21</sup> *holds.*

Proof Assume there is an environment Zop‐sec that triggers event *E10* with non-negligible probability. This immediately yields an efficient adversary B against the EUF-CMA security of SIG. Internally, B runs Zop‐sec in its head and plays the role of the simulator and all honest parties. Externally, B plays the EUF-CMA security game with a challenger C and a signing

⁹ This is a consequence of the considered corruption model. In the case of operator security, corrupted PoSes are only admissible, if all users are corrupted.

oracle SIG_{pk^pp_P, sk^pp_P}. B needs to guess the honest PoS with pid_P for which the environment Z^op-sec eventually forges a signature while it plays a corrupted user who tries to prove its participation in a transaction with this particular PoS towards an honest violation enforcer. When the PoS with pid_P registers itself in the scope of RegisterPOS and B, playing S_21^op-sec, needs to provide pk_P = (pk^upd_P, pk^rc_P, pk^pp_P), it embeds the external challenge public key as pk^pp_P. Whenever B playing the role of S_21^op-sec needs to issue a signature with respect to pk^pp_P, it uses its external EUF-CMA oracle SIG_{pk^pp_P, sk^pp_P}. When the event *E10* occurs, B extracts c_{pk_U} from ω_pp and σ_pp from ψ_pp and outputs the forgery. N.b., the c_{pk_U} has never been signed with respect to pk^pp_P by assumption, as otherwise ̄ pp would have been defined for ω_pp and the event *E10* would not have been triggered.

**Lemma 8.22 (Indistinguishability between $H\_{21}^{\text{op-sec}}$ and $H\_{22}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{21}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{22}^{\text{op-sec}}$ holds.*

Proof Assume there is an environment Zop‐sec that triggers event *E11* with non-negligible probability. This immediately yields an efficient adversary B against the binding property of C2. Internally, B runs Zop‐sec in its head and plays the role of the simulator and all honest parties. When the event *E11* occurs, <sup>B</sup> extracts the commitment $c\_{pk\_{\mathcal{U}}}$ from the prove-participation tag, gathers the current public key $pk\_{\mathcal{U}}$ of the user it currently interacts with together with the provided decommitment $d\_{pk\_{\mathcal{U}}}$, looks up the original public key $pk\_{\mathcal{U}}^{*}$ and the original decommitment $d\_{pk\_{\mathcal{U}}}^{*}$ that have been recorded for this tag, and outputs the two openings $(c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}, pk\_{\mathcal{U}})$ and $(c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}^{*}, pk\_{\mathcal{U}}^{*})$. Note that $pk\_{\mathcal{U}} \neq pk\_{\mathcal{U}}^{*}$ holds and both openings are valid by assumption.

**Lemma 8.23 (Indistinguishability between $H\_{22}^{\text{op-sec}}$ and $H\_{23}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{22}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{23}^{\text{op-sec}}$ holds.*

Proof Ultimately, this hop only changes which part of the code is executed by which entity, i.e., the hop is perfectly indistinguishable from the perspective of Zop‐sec .

This is obvious in the case of an honest user and a corrupted violation enforcer. Honest users always send the true decommitment information that originally belongs to their prove-participation tag, and they only do so for prove-participation tags that are their own. This is exactly what the simulator does on behalf of the dummy user.

In case of a corrupted user and an honest violation enforcer the only way for Zop‐sec to distinguish between $H\_{22}^{\text{op-sec}}$ and $H\_{23}^{\text{op-sec}}$ is to make the honest violation enforcer output a different result bit. In summary, this is impossible due to the sanity checks that have been introduced in $H\_{21}^{\text{op-sec}}$ and $H\_{22}^{\text{op-sec}}$. The detailed argument considers the branches of the program flow individually. If the signature is invalid, i.e. $\mathsf{SIG.Vfy}(pk\_{\mathcal{P}}^{\mathrm{pp}}, \sigma\_{\mathrm{pp}}, c\_{pk\_{\mathcal{U}}}) = 0$ holds, or the decommitment fails, i.e. $\mathsf{C2.Open}(crs\_{\mathrm{com}}^{(2)}, pk\_{\mathcal{U}}, c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}) = 0$ holds, the simulator calls the ideal functionality with input $\psi\_{\mathrm{pp}} = \bot$ (cp. Step 3f in Fig. 8.17) and the ideal functionality always outputs *result* = NOK to *VE*. The real code also returns *result* = NOK under the same condition. We now consider the case that the simulator runs the ideal code with an input $\psi\_{\mathrm{pp}} \neq \bot$. Step 3i in Fig. 8.17 is only reached if the conditions of Steps 3f to 3h all failed. Formally, this means

$$
\neg\left(
\begin{aligned}
&\bigl(\psi\_{\mathrm{pp}} = \bot \lor \mathsf{Vfy}(pk\_{\mathcal{P}}^{\mathrm{pp}}, \sigma\_{\mathrm{pp}}, c\_{pk\_{\mathcal{U}}}) = 0 \lor \mathsf{Open}(crs\_{\mathrm{com}}^{(2)}, pk\_{\mathcal{U}}, c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}) = 0\bigr) \\
&\lor \bigl(\psi\_{\mathrm{pp}}^{*} = \bot \land pid\_{\mathcal{P}} \notin \mathcal{PID}\_{\mathrm{corr}}\bigr) \\
&\lor \bigl(\mathsf{Open}(crs\_{\mathrm{com}}^{(2)}, pk\_{\mathcal{U}}, c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}) = 1 \land pk\_{\mathcal{U}} \neq pk\_{\mathcal{U}}^{*}\bigr)
\end{aligned}
\right) \tag{8.27}
$$

holds. After simplification (note that some parts cancel out due to inverse conditions on Open: since Open evaluates to 1 by the negation of the first clause, the negated third clause collapses to $pk\_{\mathcal{U}} = pk\_{\mathcal{U}}^{*}$),

$$
\begin{aligned}
&\psi\_{\mathrm{pp}} \neq \bot \land \mathsf{Vfy}(pk\_{\mathcal{P}}^{\mathrm{pp}}, \sigma\_{\mathrm{pp}}, c\_{pk\_{\mathcal{U}}}) = 1 \land \mathsf{Open}(crs\_{\mathrm{com}}^{(2)}, pk\_{\mathcal{U}}, c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}) = 1 \\
&\land\; pk\_{\mathcal{U}} = pk\_{\mathcal{U}}^{*} \land \bigl(\psi\_{\mathrm{pp}}^{*} \neq \bot \lor pid\_{\mathcal{P}} \in \mathcal{PID}\_{\mathrm{corr}}\bigr)
\end{aligned} \tag{8.29}
$$

remains. We further note that $pk\_{\mathcal{U}} = pk\_{\mathcal{U}}^{*}$ implies $\psi\_{\mathrm{pp}}^{*} \neq \bot$, or inversely stated, if $\psi\_{\mathrm{pp}}^{*}$ was invalid, then $pk\_{\mathcal{U}}^{*}$ would be undefined, too. Hence, the last predicate inside the or-bracket is irrelevant and can be dropped. Also, we exploit that $pk\_{\mathcal{U}} = pk\_{\mathcal{U}}^{*}$ can equivalently be substituted by $pid\_{\mathcal{U}} = pid\_{\mathcal{U}}^{*}$, and we finally obtain

$$
\begin{aligned}
&\psi\_{\mathrm{pp}} \neq \bot \land \mathsf{Vfy}(pk\_{\mathcal{P}}^{\mathrm{pp}}, \sigma\_{\mathrm{pp}}, c\_{pk\_{\mathcal{U}}}) = 1 \land \mathsf{Open}(crs\_{\mathrm{com}}^{(2)}, pk\_{\mathcal{U}}, c\_{pk\_{\mathcal{U}}}, d\_{pk\_{\mathcal{U}}}) = 1 \\
&\land\; \psi\_{\mathrm{pp}}^{*} \neq \bot \land pid\_{\mathcal{U}} = pid\_{\mathcal{U}}^{*}
\end{aligned} \tag{8.31}
$$

The first line of this expression is exactly the condition under which the real code returns *result* = OK; the last line is the condition under which the ideal functionality returns OK.

**Lemma 8.24 (Indistinguishability between $H\_{23}^{\text{op-sec}}$ and $H\_{24}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{23}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{24}^{\text{op-sec}}$ holds.*

Proof In this hop all encryptions of wallet IDs inside blacklisting tags are replaced by encryptions of a 1-vector for all honest users. This does not change the output of an honest operator, as $H\_{19}^{\text{op-sec}}$ has eliminated their decryption.

We further split this hop into a sequence of sub-hybrids, with each sub-hybrid replacing a single encryption in reverse order of appearance. Assume Zop‐sec can distinguish between $H\_{23}^{\text{op-sec}}$ and $H\_{24}^{\text{op-sec}}$ with non-negligible advantage. This yields an efficient adversary B against the IND-CCA security of the encryption scheme ENC1. Internally, B runs Zop‐sec and plays the role of all parties and the simulator for Zop‐sec. Externally, B plays the IND-CCA security game with a challenger C and a decryption oracle for *skDR*. When B—playing the role of the simulator—needs to provide the public key in the scope of RegisterDR, it embeds the challenge key *pkDR*. B needs to guess the index $i$ of the sub-hybrid that causes a non-negligible difference, i.e., B needs to guess which (user) wallet causes distinguishability. For the first $(i-1)$ invocations of IssueWallet, B encrypts the true seed, in the $i$-th invocation B embeds the external challenge, and B encrypts a 1-vector for the remaining invocations of IssueWallet. If Zop‐sec invokes the task BlacklistWallet between O and *VE* and B needs to restore the wallet ID, the following two cases may occur: (1) The presented blacklisting tag is genuine. In this case B uses the lookup table that has been established in $H\_{7}^{\text{op-sec}}$ to recover the original wallet ID and thereby the correct set of fraud-detection IDs (cp. hybrid $H\_{19}^{\text{op-sec}}$). (2) The presented blacklisting tag is a fake. In this case B uses its decryption oracle to restore the wallet ID and to create a set of fraud-detection IDs. B outputs whatever Zop‐sec outputs.

**Lemma 8.25 (Indistinguishability between $H\_{24}^{\text{op-sec}}$ and $H\_{25}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{24}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{25}^{\text{op-sec}}$ holds.*

Proof This hop is perfectly indistinguishable. As long as no double-spending occurs, the user chooses a fresh $u\_1$ in every transaction and thus a single point $(u\_2, t)$ is information-theoretically independent of *sk*U.
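To make the argument concrete, the following small sketch (Python; the modulus and the variable names are illustrative stand-ins, not the actual group of P5C) shows that one point on the line $t = u\_1 + sk\_{\mathcal{U}} \cdot u\_2$ hides $sk\_{\mathcal{U}}$ because $u\_1$ is fresh and uniform, whereas two points sharing the same $u\_1$ determine it: this is exactly the double-spending extraction used elsewhere in the proof.

```python
# Sketch of the double-spending arithmetic (hypothetical parameters, not the
# actual P5C group): t = u1 + sk_u * u2 (mod p) with fresh, uniform u1 hides
# sk_u information-theoretically; two responses sharing the same u1 reveal it.
import secrets

p = 2**255 - 19                      # stand-in for the prime group order

def respond(sk_u, u1, u2):
    """Double-spending response for challenge u2 and per-state randomness u1."""
    return (u1 + sk_u * u2) % p

def extract_sk(t1, u2_1, t2, u2_2):
    """Recover sk_u from two responses that share the same u1 (double-spending)."""
    return (t1 - t2) * pow(u2_1 - u2_2, -1, p) % p

sk_u, u1 = secrets.randbelow(p), secrets.randbelow(p)
u2a, u2b = secrets.randbelow(p), secrets.randbelow(p)
assert extract_sk(respond(sk_u, u1, u2a), u2a,
                  respond(sk_u, u1, u2b), u2b) == sk_u
```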

**Lemma 8.26 (Indistinguishability between $H\_{25}^{\text{op-sec}}$ and $H\_{26}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{25}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{26}^{\text{op-sec}}$ holds.*

Proof This hop only replaces the encryption of recalculation tags if the operator is honest. If the operator is corrupted, the hop only changes which parts of the code are executed by which entity, and the hop is perfectly indistinguishable.

If there is an environment Zop‐sec that can efficiently distinguish with non-negligible advantage, this yields an adversary B against the IND-CCA security of ENC2. The proof is analogous to the proof of Lemma 8.24, but for the operator instead of the violation enforcer and with blacklisting tags replaced by recalculation tags.

Externally, B plays the IND-CCA security game with a challenger C and a decryption oracle for *sk*rc,enc O . When B—playing the role of the simulator—needs to provide the public key *pk*<sup>O</sup> = (*pk*cert O , *pk*upd O , *pk*fix O , *pk*rc,sig O , *pk*rc,enc O ) in the scope of RegisterOp, it embeds the challenge key *pk*rc,enc O . B needs to guess the index $i$ of the sub-hybrid that causes a non-negligible difference, i.e., B needs to guess which recalculation tag causes distinguishability. For the first $(i-1)$ invocations of Deposit and Disburse, B encrypts a true recalculation tag, in the $i$-th invocation B embeds the external challenge, and B encrypts an arbitrary, but fixed value for the remaining invocations of Deposit and Disburse. If Zop‐sec invokes the task RecalculateBalance for O, the following two cases may occur: (1) The presented recalculation tag is genuine. In this case B uses the simulated transaction database *TRDB* that has been established in $H\_{7}^{\text{op-sec}}$ to recover the originally recorded values (cp. hybrid $H\_{20}^{\text{op-sec}}$). (2) The presented recalculation tag is a fake. In this case B uses its decryption oracle to restore the necessary values. B outputs whatever Zop‐sec outputs.

**Lemma 8.27 (Indistinguishability between $H\_{26}^{\text{op-sec}}$ and $H\_{27}^{\text{op-sec}}$)** *Under the assumptions of Theorem 8.2, $H\_{26}^{\text{op-sec}} \stackrel{c}{\equiv} H\_{27}^{\text{op-sec}}$ holds.*

Proof In this hop the simulator S op‐sec <sup>27</sup> sends simulated commitments as prove-participation tags for honest users. In case the violation enforcer is corrupted, simulator S op‐sec <sup>27</sup> equivocates these commitments to the correct *pk*<sup>U</sup> on demand when Zop‐sec calls ProveParticipation for an honest user and an affected prove-participation tag. If <sup>Z</sup>op‐sec has a non-negligible advantage to distinguish between $H\_{26}^{\text{op-sec}}$ and $H\_{27}^{\text{op-sec}}$, then an efficient adversary <sup>B</sup> can be constructed against the hiding property and equivocality of C2. Again, this proof proceeds in a series of sub-hybrids with each sub-hybrid replacing a single commitment by a simulated one.

Taking all the aforementioned statements together, Theorem 8.2 from the beginning of this section follows. For the sake of formal completeness, we recall it again.

**Theorem 8.2 (Operator Security)** *Under the assumptions of Theorem 8.1*

$$
\pi\_{\text{P5C}}^{\mathcal{F}\_{\text{CRS}}, \mathcal{F}\_{\text{bb}}, \mathcal{F}\_{\text{msg}}} \geq\_{\text{UC}} \mathcal{F}\_{\text{apc}} \tag{8.3}
$$

*holds under static corruption of*


Proof A direct consequence of Lemmas 8.4 to 8.27.

## **8.4 Proof of User Security and Privacy**

In this section we show the remaining half of Theorem 8.1 by proving the following theorem.

**Theorem 8.28 (User Security and Privacy)** *Under the assumptions of Theorem 8.1*

$$
\pi\_{\text{P5C}}^{\mathcal{F}\_{\text{CRS}}, \mathcal{F}\_{\text{bb}}, \mathcal{F}\_{\text{msg}}} \geq\_{\text{UC}} \mathcal{F}\_{\text{apc}} \tag{8.32}
$$

*holds under static corruption of*

*(1) a subset of PoSes, operator and violation enforcer, or*

*(2) all PoSes, operator and violation enforcer as well as a subset of users.*

Figure 8.19: The Simulator for User Security and Privacy

The definition of the UC-simulator S user‐sec P5C for Theorem 8.28 can be found in Figs. 8.19 to 8.34. Please note that while the real protocol P5C lives in the (FCRS,Fbb,Fmsg)-model, the ideal functionality Fapc has no CRS. The CRS is simulated, providing S user‐sec P5C with a lever to simulate the ZK proofs P1, P2, and P3, to equivocate C2, and to extract C4.

As before, we define a sequence of hybrid experiments $H\_{i}^{\text{user-sec}}$ together with simulators $\mathcal{S}\_{i}^{\text{user-sec}}$ and protocols $\pi\_{i}^{\text{user-sec}}$ such that the first hybrid $H\_{0}^{\text{user-sec}}$ is identical to the real experiment and the last hybrid $H\_{18}^{\text{user-sec}}$ is identical to the ideal experiment. The general proof strategy is


*ᵇ* Corrupted PoSes essentially have two options: They can either register "some" public key at the bulletin board or not. (N.b., the public key does not need to be honestly generated.) If they register their public keys, then they are regarded as registered from the perspective of the real protocols. Hence, the simulator must also register the PoSes with Fapc, otherwise Fapc would subsequently abort, but the real protocols do not.

Figure 8.20: The Simulator for User Security and Privacy (cont. from Fig. 8.19)


the first place. Conversely, if the PoS had registered at the bulletin-board, the real protocol does not abort. However, in this case S user‐sec P5C would have (silently) defined ̄ keys(*pid*<sup>P</sup> ) and registered the PoS with Fapc, and thus Fapc does not abort either.

Figure 8.21: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

Figure 8.22: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

#### **Simulator** S user‐sec P5C **(cont.)**

IssueWallet (for honest operator and honest user): (1) Upon receiving leakage (issuing\_wallet) from Fapc and being asked to provide $\omega\_{\text{bl}}$ …

(a) $\lambda'' \xleftarrow{\text{R}} \mathbb{Z}\_p$.

(b) $\psi\_{\text{bl}} \leftarrow \mathsf{ENC1.Enc}(pk\_{DR}, (\overbrace{1, \ldots, 1}^{\ell+2}))$.

(c) Set $\omega\_{\text{bl}} := (\lambda'', \psi\_{\text{bl}})$.

(d) Provide $\omega\_{\text{bl}}$ to $\mathcal{F}\_{\text{apc}}$.

Figure 8.23: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

Figure 8.24: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

and RegisterDR previously, otherwise Fapc would already have aborted.

#### **Simulator** S user‐sec P5C **(cont.)**

IssueWallet (for corrupted operator and honest user, continued): (6) Upon receiving leakage (issuing\_wallet) from Fapc and being asked to provide bl … (a) Set bl ≔ (″, bl). (b) Provide bl to Fapc. (7) Upon receiving output (issued\_wallet, , bl) from Fapc for O …


Figure 8.25: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

explained in Section 8.2 and is the same as in Section 8.3. We stress that some of the hybrids are nearly identical to those in Section 8.3. We proceed by giving concrete definitions of all hybrids user‐sec .

#### **Hybrid user‐sec** 0 **(The real experiment)** The hybrid user‐sec 0 is defined as

$$H\_0^{\text{user-sec}} := \mathsf{EXEC}\_{\pi\_0^{\text{user-sec}}, \mathcal{S}\_0^{\text{user-sec}}, \mathsf{Z}^{\text{user-sec}}}(1^n) \tag{8.33}$$

with $\mathcal{S}\_{0}^{\text{user-sec}} := \mathcal{D}$ being identical to the dummy adversary and $\pi\_{0}^{\text{user-sec}} := \pi\_{\text{P5C}}$. Hence, $H\_{0}^{\text{user-sec}}$ denotes the real experiment.

**Hybrid user‐sec** 1 **(Fake setup)** In hybrid user‐sec <sup>1</sup> we modify S user‐sec 1 such that *crs*pok is generated by SetupSim, *crs* (2) com is generated by C2.SetupSim and *crs* (4) com is generated by C4.SetupExt. S user‐sec 1 initializes ̄ keys and ̄ pp as "empty" maps. Additionally, <sup>S</sup> user‐sec 1 invokes an internal instance of F sim msg instead of the external instance Fmsg and reroutes all input/output accordingly. All calls to the bulletin-board Fbb are handled internally by S user‐sec 1 using the map ̄ keys.

**Hybrid user‐sec** 2 **(Simulate honest keys)** Hybrid user‐sec 2 replaces the code in the tasks RegisterDR, RegisterOp, RegisterPOS and RegisterUser of the protocol user‐sec 2 such that the simulator S user‐sec 2 is asked for the keys instead. Also, if corrupted PoSes or users try to register a (maliciously) generated public key at the bulletin-board Fbb, then S user‐sec 2 calls RegisterPOS or RegisterUser, resp., in order to simultaneously register the parties for Fapc. S user‐sec 2 defines ̄ keys appropriately. This equals the method in which the keys are generated in the ideal experiment.

**Simulator** S user‐sec P5C **(cont.)**

Deposit (for honest PoS and honest user):

(1) Upon receiving leakage (depositing, *pid*<sup>U</sup> , ds) from Fapc, …
(a) Set (*pk*<sup>U</sup> ,*sk*U) ≔ ̄ keys(*pid*<sup>U</sup> ).*ᵃ*
(b) Pick <sup>2</sup> <sup>R</sup>← ℤ .
(c) Check if ∃ ′ ds = (′ , ′ , ′ 2 ) ∈ ds, s.t. ′ ds ≠ ⊥ and ′ <sup>2</sup> ≠ <sup>2</sup> hold.*ᵇ*
(d) If yes, ≔ ′ + *sk*U(<sup>2</sup> − ′ 2 ).
(e) Provide ≔ *sk*<sup>U</sup> to Fapc.

(2) Upon receiving leakage (depositing, , , *pid*<sup>P</sup> ) or (depositing, , , *pid*<sup>P</sup> , ) from Fapc and being asked to provide (ds, rc, pp), …
(a) Set (*pk*<sup>O</sup> , ⊥, ⊥) ≔ ̄ keys(*pid*<sup>O</sup> ).*ᶜ*
(b) Parse *pk*rc,enc O from *pk*<sup>O</sup> .
(c) Parse *pk*rc P /*sk*rc <sup>P</sup> and *pk*pp P /*sk*pp P from *pk*<sup>P</sup> /*sk*P.
(d) If (, <sup>2</sup> ) is not yet defined, pick (, <sup>2</sup> ) <sup>R</sup>← ℤ<sup>2</sup> .*ᵈ*
(e) Set ds ≔ (, , <sup>2</sup> ).
(f) If has not been leaked (i.e., operator is honest): (i) Set rc to an arbitrary value from the correct space.*ᵉ* Else (i.e., operator is corrupted): (i) Set rc <sup>←</sup> SIG.Sign(*sk*rc <sup>P</sup> , (, , 1 )). (ii) Set rc ≔ (, , , *pk*rc P , rc).
(g) Set rc <sup>←</sup> ENC2.Enc(*pk*rc,enc O , rc).
(h) Run (*pk*<sup>U</sup> , ̄ *pk*<sup>U</sup> ) ← C2.CommitSim(*crs* (2) com).
(i) Assign pp <sup>←</sup> SIG.Sign(*sk*pp P , *pk*<sup>U</sup> ).
(j) Set pp ≔ *pk*<sup>U</sup> and pp ≔ (*pk*pp P , pp, ̄ *pk*<sup>U</sup> ).
(k) Define ̄ pp(pp) ≔ (pp, <sup>1</sup> ).
(l) Provide (ds, rc, pp) to Fapc.

*ᵃ* N.b.: This assignment exists. An honest user must have called RegisterUser previously, otherwise Fapc would already have aborted.

*ᵇ* N.b., even if the user commits double-spending no "useful", previous double-spending tag may exist in ds, if the user only did so at corrupted PoSes that undermine double-spending detection.

*ᶜ* N.b.: These assignments exist. The operator/PoS must have called RegisterOp/RegisterPOS previously, otherwise Fapc would already have aborted.

*ᵈ* Step 1d is only executed, if the user commits double-spending.

*<sup>ᵉ</sup>* The hidden recalculation tag is of the form rc = (, , , *pk*rc P , rc) ∈ <sup>1</sup> × <sup>1</sup> × ℤ × (<sup>2</sup> 1 × <sup>3</sup> 2 ) × (<sup>2</sup> 2 × <sup>1</sup> ), e.g., rc ≔ (1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) would be a good choice.

Figure 8.26: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

*ᵇ* Use empty set as blacklist.

*ᶜ* N.b.: This assignment exists. An honest user must have called RegisterUser previously, otherwise Fapc would already have aborted.

*ᵈ* N.b., even if the user commits double-spending no "useful", previous double-spending tag may exist in ds, if the user only did so at corrupted PoSes that undermine double-spending detection.

Figure 8.27: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

Figure 8.28: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

*ᵇ* N.b., even if the user commits double-spending no "useful", previous double-spending tag may exist in ds, if the user only did so at corrupted PoSes that undermine double-spending detection.

*ᶜ* N.b.: This assignment exists. The operator must have called RegisterOp previously, otherwise Fapc would already have aborted.

*ᵈ* Step 1d is only executed, if the user commits double-spending.

*<sup>ᵉ</sup>* The hidden recalculation tag is of the form rc = (, , , *pk*rc P , rc) ∈ <sup>1</sup> × <sup>1</sup> × ℤ × (<sup>2</sup> 1 × <sup>3</sup> 2 ) × (<sup>2</sup> 2 × <sup>1</sup> ), e.g., rc ≔ (1, 1, 0, 1, 1, 1, 1, 1, 1, 1, 1) would be a good choice.

Figure 8.29: The Simulator for User Security and Privacy (cont. from Fig. 8.19)


Figure 8.30: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

*ᵃ* This assignment exists. (detecting\_ds, *pid*<sup>U</sup> ) is only leaked if the user truly committed double-spending. In this case Step 5 in Fig. 8.27 and Step 4 in Fig. 8.30 have been called previously. In all other cases the honest user and therefore S user‐sec P5C knows *sk*<sup>U</sup> anyway.

Figure 8.31: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

*ᵃ* This might hold, if bl is a simulated blacklisting tag for an honest user (cp. Step 1b in Fig. 8.23 or Step 4b in Fig. 8.24).

Figure 8.32: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

*ᵇ* N.b.: Fapc asks for an alternative *pid*<sup>U</sup> if and only if bl has not been recorded internally. I.e., for a simulated, but legitimately issued bl, which encrypts a "useless" 1-vector, S user‐sec P5C is not compelled to provide (*pid*<sup>U</sup> , ).

#### **Simulator** S user‐sec P5C **(cont.)**

RecalculateBalance (only for honest operator*ᵃ*): Upon receiving leakage (recalculating\_balance, *bl*, fake rc ) from Fapc and being asked to provide deviate …

(1) fake rc ≔ {rc ← ENC2.Dec(*sk*rc,enc O , rc) | rc ∈ fake rc }
(2) fake,valid rc ≔ {(, , , *pk*rc P , rc) ∈ rc | SIG.Vfy(*pk*rc P , rc, (, , 1 )) = 1}
(3) ≔ {(, ) | ∃ rc = (, , , ⋅, ⋅) ∈ fake,valid rc ∧ ∈ *bl*}
(4) Provide deviate ≔ ∑(,)∈ to Fapc.

*ᵃ* This is a local algorithm and hence there is nothing to simulate for a corrupted operator.

Figure 8.33: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

Note, the modifications of hybrid user‐sec 2 are identical to hybrid op‐sec 2 .

**Hybrid user‐sec** 3 **(Simulate PoS' certificate)** In hybrid user‐sec 3 the task CertifyPOS is modified. For an honest operator or an honest PoS the code of user‐sec 3 is replaced by the code of a dummy party. The simulator S user‐sec 3 behaves in this case as the final simulator S user‐sec P5C would. More precisely, if both parties are honest, protocol user‐sec 3 is modified such that the simulator S user‐sec 3 receives the message (certifying\_pos, *pid*<sup>P</sup> , P) and creates the certificate *cert*P. If the PoS is corrupted, but the operator honest, the certificate *cert*<sup>P</sup> is also created by simulator S user‐sec 3 . If the PoS is honest, but the operator corrupted, simulator S user‐sec 3 receives *cert*<sup>P</sup> as part of the message from the operator. In either case, simulator S user‐sec 3 learns *cert*<sup>P</sup> and internally records it in ̄ keys. Whenever the honest operator or honest PoSes running user‐sec <sup>3</sup> would send *cert*<sup>P</sup> (or cert P ) as part of their messages in the scope of IssueWallet or Deposit, they omit *cert*P. Instead, the simulator S user‐sec 3 injects *cert*<sup>P</sup> into the messages.

Note, except for the additional case that the PoS is honest and the operator corrupted, the modifications of hybrid user‐sec 3 are identical to hybrid op‐sec 3 .

**Hybrid user‐sec** 4 **(Extract serial number)** user‐sec <sup>4</sup> modifies the tasks of IssueWallet and Deposit in case of a corrupted operator/PoS. The code of user‐sec 4 for the user is modified such that it does not send ′ but randomly picks and sends it to S user‐sec 4 . Then S user‐sec 4 extracts ″ ← C4.Extract(*crs* (4) com, ″ ser), calculates ′ ≔ ⋅ (″) −1 and inserts ′ into the message from the user to the operator or PoS respectively.

**Hybrid user‐sec** 5 **(Simulate ZK-proofs)** This hybrid modifies user‐sec 5 such that the honest users do not send any proofs. Instead, the simulator S user‐sec 5 appends simulated proofs to the messages from the user to the operator or PoSes without knowing the witness.


Figure 8.34: The Simulator for User Security and Privacy (cont. from Fig. 8.19)

**Hybrid user‐sec** 6 **(Fake commitments for wallet ID and wallet components)** Hybrid user‐sec <sup>6</sup> modifies user‐sec 6 such that honest users do not send the commitments ′ wid, fix and upd in the IssueWallet and ′ upd in the Deposit task. Instead, S user‐sec 6 injects suitable commitments to vectors of zeros. This equals the behavior of the final simulator S user‐sec P5C .

**Hybrid user‐sec** 7 **(Record Tags)** user‐sec 7 replaces the code protocol user‐sec 7 of the tasks IssueWallet, Deposit and Disburse such that the various tags are not exclusively created by the parties' code but with support from S user‐sec 7 and then recorded by S user‐sec 7 . More precisely, these are

$$
\omega\_{\text{bl}} := (\lambda'', \psi\_{\text{bl}}) \qquad\qquad \omega\_{\text{ds}} := (\varphi, t, u\_2) \tag{8.34}
$$

$$
\omega\_{\text{rc}} \leftarrow \mathsf{ENC2.Enc}(pk\_{\mathcal{O}}^{\text{rc,enc}}, \psi\_{\text{rc}}) \qquad\qquad \omega\_{\text{pp}} := c\_{pk\_{\mathcal{U}}} \tag{8.35}
$$

To this end, user‐sec 7 and S user‐sec 7 are changed in detail as follows.


¹⁰ Note, that S user‐sec 7 knows *sk*<sup>U</sup> as the user is honest.

For the prove-participation tag pp: The code of the honest users is modified such that they ask S user‐sec 7 for the final pp which is then output by the users. The honest user does not send *pk*<sup>U</sup> anymore nor expects to receive pp. Instead, S user‐sec 7 creates *pk*<sup>U</sup> , *pk*<sup>U</sup> itself¹¹ and injects *pk*<sup>U</sup> into the message from the user to the PoS. When the PoS replies with pp, S user‐sec 7 compiles (pp, pp) ≔ (*pk*<sup>U</sup> , (*pk*pp P , pp, *pk*<sup>U</sup> )), internally records ̄ pp(pp) ≔ (pp, *pk*<sup>U</sup> ) and provides pp to the user. Moreover, when the honest user sends (pp, pp) in the scope of ProveParticipation, the honest user only sends pp and S user‐sec 7 injects pp using ̄ pp.

In summary, these modifications leak (, ) (for ds-tags) and—in case of a corrupted operator in Deposit—also (for rc-tags). This equals the behavior of the final Fapc (cp. Fig. 4.11, Step 10 and Fig. 4.12, Step 7).

On top, user‐sec 7 *provisionally* leaks ′ rc which is still honestly created by user‐sec 7 and simply mirrored back by S user‐sec 7 as rc. This over-leakage is reverted in hybrid user‐sec <sup>17</sup> . Also, S user‐sec 7 exploits the user's identity to create pp honestly, which the final simulator cannot do. This is repaired in hybrid user‐sec <sup>18</sup> .

**Hybrid user‐sec** 8 **(Decoupling the PRF)** This hybrid introduces a new incorruptible entity F‐rand into the experiment that is only accessible by honest users and the simulator through subroutine input/output tapes.¹² F‐rand provides the following functionality: Internally, F‐rand manages a partial map, mapping pairs of wallet IDs and counters to fraud-detection IDs. Whenever an as yet undefined entry for a pair $(\lambda, x)$ of wallet ID and counter is required, F‐rand defines this entry as $\mathsf{PRF}(\lambda, x)$. If an honest user or the simulator requests a fraud-detection ID for $(\lambda, x)$, F‐rand returns the recorded entry.
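A minimal sketch of this bookkeeping (Python; the class name, the HMAC-based stand-in PRF and the interface are illustrative assumptions, not the construction used in the thesis) shows the lazy sampling; the flag already anticipates the replacement of the PRF by truly random values in hybrid user‐sec 15.

```python
# Sketch of the helper entity F-rand (hypothetical interface): fraud-detection
# IDs are served from a partial map that is filled lazily. In this hybrid the
# entries are PRF outputs; hybrid 15 later replaces them by uniform randomness.
import hmac, hashlib, secrets

class FraudDetectionIDs:
    def __init__(self, use_prf=True):
        self.table = {}            # (wallet_id, counter) -> fraud-detection ID
        self.use_prf = use_prf     # True in hybrid 8, False from hybrid 15 on

    def _prf(self, wallet_id, counter):
        # stand-in PRF keyed with the wallet ID; the thesis uses a different PRF
        return hmac.new(wallet_id, counter.to_bytes(8, "big"), hashlib.sha256).digest()

    def get(self, wallet_id: bytes, counter: int) -> bytes:
        key = (wallet_id, counter)
        if key not in self.table:                      # lazy sampling
            self.table[key] = (self._prf(wallet_id, counter) if self.use_prf
                               else secrets.token_bytes(32))
        return self.table[key]
```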

**Hybrid user‐sec** 9 **(Create lookup table for double spending)** When S user‐sec 9 compiles the set ds within the scope of Deposit or Disburse (cp. hybrid user‐sec 7 ) and there exist matching double-spending tags ds, ′ ds ∈ ds, then set (*pid*<sup>U</sup> , ) ≔ OK with ≔ *sk*<sup>U</sup> to record this incident of double-spending as S user‐sec P5C would do.

**Hybrid user‐sec** 10 **(Utilize lookup tables for VerifyGuilt)** In case the calling party is honest, this hybrid is the same as hybrid op‐sec <sup>18</sup> . Otherwise this hybrid does not change anything.

**Hybrid user‐sec** 11 **(Utilize lookup tables for BlacklistWallet, forego decryption of blacklisting tags)** The dispute resolver *DR* becomes a dummy party and simply sends its input

¹¹ Note that S user‐sec 7 can do so because it simulates Fmsg internally and thus knows the identity of the user.

¹² I.e., communication is confidential, reliable and trustworthy. One might think of this entity as a preliminary version of the eventual ideal functionality.

(blacklist\_wallet, *pid*′ U ) to the simulator S user‐sec <sup>11</sup> in order to signal its consent to blacklist the user. The simulator S user‐sec <sup>11</sup> utilizes the blacklisting tag recorded in hybrid user‐sec 7 and runs the joint code as the ideal functionality Fapc and S user‐sec P5C would do eventually. In particular, S user‐sec <sup>11</sup> checks the lookup table to decide whether the presented tag is a genuine or a fake tag. If the lookup is defined and the tag hence is genuine, simulator S user‐sec <sup>11</sup> does not decrypt it, but uses the recorded wallet ID and F‐rand to create the blacklist *bl* . If the lookup is undefined and the tag hence is a fake, simulator S user‐sec <sup>11</sup> decrypts it as the real dispute resolver would do. If the decrypted user ID *pid*<sup>U</sup> denotes a corrupted user, simulator S user‐sec <sup>11</sup> creates a blacklist *bl* using the real code; in particular, simulator S user‐sec <sup>11</sup> does not call F‐rand, but directly uses the PRF to obtain a list of fraud-detection IDs $\mathit{bl} := \{\varphi\_{\lambda,0}, \ldots, \varphi\_{\lambda,\text{bound}}\}$ for the decrypted wallet ID $\lambda$. If the decrypted user ID *pid*<sup>U</sup> denotes an honest user, simulator S user‐sec <sup>11</sup> creates the blacklist *bl* using F‐rand.
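The case distinction of this hybrid can be summarized by a short sketch (Python; the callbacks for the lookup table, decryption, PRF, corruption status, and the bound are hypothetical stand-ins for the mechanisms described above):

```python
# Sketch of BlacklistWallet in this hybrid (hypothetical interfaces): genuine
# tags are resolved via the lookup table and F-rand without decryption;
# fake tags are decrypted as the real dispute resolver would.
def blacklist(tag, lookup, f_rand, prf, decrypt, is_corrupted, bound):
    wallet_id = lookup(tag)
    if wallet_id is not None:                 # genuine tag: skip decryption
        return [f_rand(wallet_id, i) for i in range(bound + 1)]
    wallet_id, user_pid = decrypt(tag)        # fake tag: decrypt as in the real protocol
    ids = prf if is_corrupted(user_pid) else f_rand
    return [ids(wallet_id, i) for i in range(bound + 1)]
```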

**Hybrid user‐sec** 12 **(Utilize lookup tables for RecalculateBalance, forego decryption of recalculation tags)** When the task RecalculateBalance is invoked, S user‐sec <sup>12</sup> partitions the set of recalculation tags into the two sets of genuine and fake tags the same way as Fapc would do. Genuine recalculation tags are not decrypted; instead, S user‐sec <sup>12</sup> uses the serial number and price of the original transaction to compile a set of pairs $(s, p)$. Fake recalculation tags are still decrypted, their signature is checked for validity, and the corresponding pairs $(s, p)$ are compiled from the decrypted values. Then the balance is calculated as $\mathit{bill} := \sum\_{(s,p)\in \text{genuine}} p + \sum\_{(s,p)\in \text{fake}} p$.

This behavior equals the joint behavior of Fapc and the final simulator S user‐sec P5C (cp. Figs. 4.16 and 8.33).

The modifications of hybrid user‐sec <sup>12</sup> are identical to those of hybrid op‐sec <sup>20</sup> .
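A compact sketch of this bookkeeping (Python; the dictionary of recorded transactions and the decryption/verification callbacks are hypothetical stand-ins for the simulated transaction database, ENC2 and SIG):

```python
# Sketch of RecalculateBalance in this hybrid (hypothetical interfaces):
# recorded (genuine) tags are settled from the lookup table, unknown (fake)
# tags are decrypted and their signature checked; the bill is the sum of both.
def recalculate_balance(tags, recorded, decrypt, verify):
    genuine = [recorded[t] for t in tags if t in recorded]          # (s, p) pairs
    fake = []
    for t in (t for t in tags if t not in recorded):
        s, p, pk_rc_pos, sigma = decrypt(t)
        if verify(pk_rc_pos, sigma, (s, p)):                        # drop invalid tags
            fake.append((s, p))
    return sum(p for _, p in genuine) + sum(p for _, p in fake)
```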

**Hybrid user‐sec** 13 **(Utilize lookup tables for ProveParticipation, forego unveiling of prove-participation tags)** This hybrid utilizes ̄ pp to link legitimately issued prove-participation tags to their origin. The code of the honest users is modified such that they do not send (pp, pp) but only pp. Also, the users do not internally test if pp is one of their own prove-participation tags, but simply forward it as a dummy party would do. If the result is positive, S user‐sec <sup>13</sup> looks up the corresponding pp in ̄ pp and simulates the message (pp, pp). Note, the decommitments are not yet equivocated (as the final simulator would do), but S user‐sec <sup>13</sup> sends the original pp that have been recorded in hybrid user‐sec 7 .

Note that hybrid user‐sec <sup>13</sup> is a simplified variant of hybrid op‐sec <sup>23</sup> . In hybrid user‐sec <sup>13</sup> , only honest users interacting with a corrupted violation enforcer need to be considered. If the user was corrupted, the violation enforcer would have to be corrupted, too, as required by Theorem 8.28.

**Hybrid user‐sec** 14 **(Fake blacklisting tags for honest users)** The code user‐sec <sup>14</sup> for honest users in the scope of IssueWallet is modified such that they do not send bl. Instead, S user‐sec 14

returns $\omega\_{\text{bl}} := (\lambda'', \psi\_{\text{bl}})$ with $\psi\_{\text{bl}} \leftarrow \mathsf{ENC1.Enc}(pk\_{DR}, (1, \ldots, 1))$ when <sup>O</sup> asks for the blacklisting tag (cp. hybrid user‐sec 7 ).

**Hybrid user‐sec** 15 **(Use truly random fraud-detection IDs)** Hybrid user‐sec <sup>15</sup> replaces the PRF inside F‐rand by truly random values. Whenever an as yet undefined fraud-detection ID for a pair $(\lambda, x)$ is required, F‐rand independently and uniformly draws a fresh random fraud-detection ID $\varphi$ and records it for $(\lambda, x)$.

**Hybrid user‐sec** 16 **(Fake double-spending tags for honest users)** The code user‐sec <sup>16</sup> for honest users in the scope of Deposit and Disburse is modified such that they do not send a real double-spending response $t$. When the operator asks for the double-spending tag (cp. hybrid user‐sec 7 ), the simulator S user‐sec <sup>16</sup> proceeds as follows. If no matching tag $(\varphi, t', u\_2')$ has been recorded previously, S user‐sec <sup>16</sup> picks $t \xleftarrow{\text{R}} \mathbb{Z}\_p$ randomly. This equals the behavior of the final simulator S user‐sec P5C .

Note, the modifications of this hybrid are identical to hybrid op‐sec <sup>25</sup> .

**Hybrid user‐sec** 17 **(Fake recalculation tags for honest users)** The code user‐sec <sup>17</sup> for honest operator/PoS in the scope of Deposit and Disburse abandons the over-leakage of ′ rc that has provisionally been introduced by hybrid user‐sec 7 . When they ask for rc the simulator does not simply reflect rc ≔ ′ rc, but instead creates rc on its own. The simulator does so in two different ways, depending on the corruption status of the operator.

If the operator is corrupted,¹³ the simulator creates rc ≔ (, , , *pk*rc P , rc) with rc ← SIG.Sign(*sk*rc <sup>P</sup> , (, , 1 ))faithfully and provides a true encryption rc <sup>←</sup> ENC2.Enc(*pk*rc,enc O , rc). We stress that S user‐sec <sup>17</sup> knows all relevant information , , *pid*<sup>P</sup> and due to the leakage introduced by hybrid user‐sec 7 .

If the operator is honest, S user‐sec <sup>17</sup> provides an encryption rc <sup>←</sup> ENC2.Enc(*pk*rc,enc O , rc) for an arbitrary rc from the correct space.

Note, the modifications of this hybrid are identical to hybrid op‐sec <sup>26</sup> .

**Hybrid user‐sec** 18 **(Fake prove-participation tags for honest users)** The hybrid user‐sec 18 modifies Deposit and ProveParticipation.

In Deposit the simulator S user‐sec <sup>18</sup> does not use the user's identity to create (pp, pp). Instead, S user‐sec <sup>18</sup> simulates the commitment as (*pk*<sup>U</sup> , ̄ *pk*<sup>U</sup> ) ← C2.CommitSim(*crs* (2) com). S user‐sec <sup>18</sup> sets pp ≔ *pk*<sup>U</sup> and pp ≔ (*pk*pp P , pp, ̄ *pk*<sup>U</sup> ), returns pp and defines ̄ pp(pp) ≔ (pp, <sup>1</sup> ).

Moreover, the code for ProveParticipation in case of an honest user and a corrupted violation enforcer is adapted (cp. hybrid user‐sec <sup>13</sup> ). After <sup>S</sup> user‐sec <sup>18</sup> has looked up the corresponding

¹³ N.b., for operator security the operator is always honest, i.e. this case never holds. However, we explicitly consider this case here, as this allows us to reuse this hybrid as hybrid user‐sec <sup>17</sup> to prove user security.

pp, <sup>1</sup> ) ≔ ̄ pp(pp), but before sending pp to <sup>Z</sup>user‐sec playing the corrupted *VE*, <sup>S</sup> user‐sec 18 parses (*pk*pp P , pp, ̄ *pk*<sup>U</sup> ) ≔ pp, equivocates the decommitment *pk*<sup>U</sup> ← C2.Equivoke(*crs* (2) com, *pk*<sup>U</sup> , *pk*<sup>U</sup> , ̄ *pk*<sup>U</sup> ), redefines pp ≔ (*pk*pp P , pp, *pk*<sup>U</sup> ) and then sends pp.

Again, this equals the behavior of the final simulator S user‐sec P5C .

As before, the proof of Theorem 8.28 is shown by the pairwise indistinguishability of subsequent hybrids. Most of the proofs have already been shown in a similar vein in Section 8.3. In those cases, the proofs are either only sketched or the reader is referred to the corresponding proof in the previous section.

**Lemma 8.29 (Indistinguishability between $H\_{0}^{\text{user-sec}}$ to $H\_{4}^{\text{user-sec}}$, $H\_{6}^{\text{user-sec}}$ to $H\_{10}^{\text{user-sec}}$, $H\_{11}^{\text{user-sec}}$ to $H\_{14}^{\text{user-sec}}$, as well as $H\_{15}^{\text{user-sec}}$ to $H\_{18}^{\text{user-sec}}$, resp.)** *Under the assumptions of Theorem 8.28, $H\_{0}^{\text{user-sec}} \stackrel{c}{\equiv} \cdots \stackrel{c}{\equiv} H\_{4}^{\text{user-sec}}$, $H\_{6}^{\text{user-sec}} \stackrel{c}{\equiv} \cdots \stackrel{c}{\equiv} H\_{10}^{\text{user-sec}}$, $H\_{11}^{\text{user-sec}} \stackrel{c}{\equiv} \cdots \stackrel{c}{\equiv} H\_{14}^{\text{user-sec}}$, and $H\_{15}^{\text{user-sec}} \stackrel{c}{\equiv} \cdots \stackrel{c}{\equiv} H\_{18}^{\text{user-sec}}$, resp., hold.*

Proof The indistinguishability user‐sec 0 c ≡ user‐sec 1 is proven similar to the proof of Lemma 8.4. However, with respect to the CRS of the NIZK scheme the composable zero-knowledge property of Definition 6.9 has to be used.

The modifications within the sequence of hybrids user‐sec 1 c ≡ user‐sec 2 c ≡ user‐sec 3 and user‐sec 6 c ≡ user‐sec 7 c ≡ user‐sec 8 c ≡ user‐sec 9 are only syntactical. Therefore, the same argument as for Lemma 8.5 applies. Note, the tentative functionality F‐rand which is inserted by the hop from user‐sec 7 to user‐sec 8 is inaccessible by Zuser‐sec and still uses the real PRF to generate fraud-detection IDs.

The hop from user‐sec 3 to user‐sec 4 does not change anything from the perspective of Zuser‐sec as C4 is perfectly *gp*-extractable (cp. Definition 6.11, Item (4)).

The hop from user‐sec 9 to user‐sec <sup>10</sup> is identical to the hop from hybrid op‐sec <sup>17</sup> to op‐sec <sup>18</sup> . See proof of Lemma 8.18.

The chain of indistinguishable hybrids user‐sec 11 c ≡ user‐sec 12 c ≡ user‐sec 13 c ≡ user‐sec <sup>14</sup> corresponds to op‐sec 19 c ≡ op‐sec 20 c ≡ op‐sec 23 c ≡ op‐sec <sup>24</sup> . See Lemmas 8.20, 8.23 and 8.24 for the proofs. Note that for the hop from user‐sec <sup>12</sup> to user‐sec <sup>13</sup> only the case of honest users needs to be considered in Lemma 8.23. If the user was corrupted, the violation enforcer would have to be corrupted, too, due to the restrictions imposed by Theorem 8.28.

The sequence user‐sec 15 c ≡ user‐sec 16 c ≡ user‐sec 17 c ≡ user‐sec <sup>18</sup> is identical to op‐sec 24 c ≡ op‐sec 25 c ≡ op‐sec 26 c ≡ op‐sec <sup>27</sup> and proven by Lemmas 8.25 to 8.27.

**Lemma 8.30 (Indistinguishability between $H\_{4}^{\text{user-sec}}$ and $H\_{5}^{\text{user-sec}}$)** *Under the assumptions of Theorem 8.28, $H\_{4}^{\text{user-sec}} \stackrel{c}{\equiv} H\_{5}^{\text{user-sec}}$ holds.*

Proof This hop replaces the real proofs by simulated proofs. To show indistinguishability despite this change, we actually have to consider a sequence of sub-hybrids—one for each of the different ZK proof systems P1, P2 and P3. In the first sub-hybrid all proofs for P1 are replaced by simulated proofs, in the second sub-hybrid all proofs for P2 are replaced and finally all proofs for P3. Assume there exists Zuser‐sec that notices a difference between user‐sec 4 and the first sub-hybrid. Then we can construct an adversary B that has a non-negligible advantage $\mathrm{Adv}^{\text{pok-zk}}\_{\mathsf{POK},\mathcal{B}}(n)$. Internally, B runs Zuser‐sec and plays the protocol and simulator for Zuser‐sec . All calls of the simulator to P1.Prove are forwarded by B to its own oracle in the external challenge game which is either P1.Prove or P1.ProveSim. B outputs whatever Zuser‐sec outputs. The second and third sub-hybrid follow the same lines, but this time B internally needs to generate simulated proofs for the proof system that has already been replaced in the previous sub-hybrid. As B gets the simulation trapdoor as part of its input in the external challenge game, B can do so.

**Lemma 8.31 (Indistinguishability between $H\_{5}^{\text{user-sec}}$ and $H\_{6}^{\text{user-sec}}$)** *Under the assumptions of Theorem 8.28, $H\_{5}^{\text{user-sec}} \stackrel{c}{\equiv} H\_{6}^{\text{user-sec}}$ holds.*

Proof In this hop the commitments $c'\_{\text{wid}}$, $c\_{\text{fix}}$, $c\_{\text{upd}}$ and $c'\_{\text{upd}}$ are replaced with commitments to zero-messages for every honest user. Again, the hop from user‐sec 5 to user‐sec 6 is further split into a sequence of sub-hybrids with each sub-hybrid replacing a single commitment in reverse order of appearance. Assume Zuser‐sec can distinguish between user‐sec 5 and user‐sec <sup>6</sup> with non-negligible advantage. This yields an efficient adversary B against the hiding property of C1. Please note that none of the commitments are ever opened, hence in each sub-hybrid only a single message is replaced. Internally, B runs Zuser‐sec and plays the role of all parties and the simulator for Zuser‐sec. Externally, B plays the hiding game. First, B guesses the index $i$ of the sub-hybrid which lets Zuser‐sec distinguish. For the first $(i-1)$ commitments, B commits to the true message. For the $i$-th commitment, B sends the actual message and an all-zero message to the external challenger. B embeds the external challenge commitment (either to the actual message or the all-zero message) as the $i$-th commitment. All remaining commitments are replaced by commitments to zeros. B outputs whatever Zuser‐sec outputs.

**Lemma 8.32 (Indistinguishability between $H\_{10}^{\text{user-sec}}$ and $H\_{11}^{\text{user-sec}}$)** *Under the assumptions of Theorem 8.28, $H\_{10}^{\text{user-sec}} \stackrel{c}{\equiv} H\_{11}^{\text{user-sec}}$ holds.*

Proof This hop is perfectly indistinguishable from the environment's perspective as the modifications made by hybrid user‐sec <sup>11</sup> do not change the output. Note that the dispute resolver is always honest. Ultimately, the identical outputs follow from the correctness of ENC1. If the operator is honest, too, the argument from Lemma 8.19 applies. If the operator is corrupted, the operator might send a blacklisting tag which is not genuine. In this case, the tag is still decrypted as the real dispute resolver would do. Note that F‐rand still uses the PRF internally, hence the resulting blacklist is perfectly indistinguishable from the previous hop, no matter whether the user under consideration is corrupted or not.

**Lemma 8.33 (Indistinguishability between $H\_{14}^{\text{user-sec}}$ and $H\_{15}^{\text{user-sec}}$)** *Under the assumptions of Theorem 8.28, $H\_{14}^{\text{user-sec}} \stackrel{c}{\equiv} H\_{15}^{\text{user-sec}}$ holds.*

Proof In this hop the pseudo-random fraud-detection IDs for honest users are replaced by uniformly drawn random IDs. Again, we proceed by introducing a sequence of sub-hybrids. In each sub-hybrid the fraud-detection IDs for one particular wallet ID are replaced. If Zuser‐sec can distinguish between two of the sub-hybrids, this immediately yields an efficient adversary B against the pseudorandomness game as defined in Definition 6.17. Internally, B runs Zuser‐sec and plays the protocol and simulator for Zuser‐sec. Externally, B interacts with an oracle that is either a truly random function or a pseudo-random function $\mathsf{PRF}(\hat{\lambda}, \cdot)$ for an unknown seed $\hat{\lambda}$. Whenever B, playing F‐rand internally, needs to draw a fraud-detection ID for the particular wallet $\lambda$, B uses its external oracle. B outputs whatever Zuser‐sec outputs. Please note, this argument crucially uses the fact that Zuser‐sec is information-theoretically independent of $\lambda$. The blacklisting tags for this wallet have already been replaced by encryptions of 1-vectors in the previous hybrid user‐sec <sup>14</sup> . This enables the external challenger to pick any seed $\hat{\lambda}$.

Again, we conclude this section by gathering the results and repeating the initial theorem.

#### **Theorem 8.28 (User Security and Privacy)** *Under the assumptions of Theorem 8.1*

$$
\pi\_{\text{P5C}}^{\mathcal{F}\_{\text{CRS}}, \mathcal{F}\_{\text{bb}}, \mathcal{F}\_{\text{msg}}} \geq\_{\text{UC}} \mathcal{F}\_{\text{apc}} \tag{8.32}
$$

*holds under static corruption of*

*(1) a subset of PoSes, operator and violation enforcer, or*

*(2) all PoSes, operator and violation enforcer as well as a subset of users.*


Proof A direct consequence of Lemmas 8.29 to 8.33.

## **9 Performance Evaluation**

In order to evaluate the practicality of P5C, we reconsider the performance figures from [Nag+17; Nag+20]. Please note that in neither case the exact protocol as presented here has been implemented. In [Nag+17] BBA+ lacks many of the functional improvements, especially the blacklisting mechanism. Therefore, no costly range proofs to escrow the secret wallet ID are necessary during IssueWallet. Moreover, BBA+ does not support user/PoS attributes. Hence, the message sizes and zero-knowledge proofs are smaller. In [Nag+20] a scheme which includes all functional features has been implemented and thus it is very close to the scheme presented in this thesis. Still, the belated fixes which have been introduced by this thesis are missing. However, the fixes have not changed the computationally costly zero-knowledge proofs and thus should have only little impact on the performance figures.

In summary, the following implementation figures have to be taken with a pinch of salt.

## **9.1 Hardware**

Regarding the hardware, the users, the PoSes and the remaining parties (chiefly the operator, but also the dispute resolver and the violation enforcer) have to be considered separately. For the latter group it is reasonable to assume that they may use typical PC hardware or, if necessary, reasonably powerful workstation/server hardware. In [Nag+17; Nag+20] the runtime of the operator (also known as toll service provider in [Nag+20]) is measured on a standard laptop featuring an i7-6600U processor for simplicity. In contrast, users and PoSes are typically equipped with hardware that offers less computational power, because it has to be mobile, is deployed in the field or is embedded into another system.

For BBA+ [Nag+17] the authors consider a pre-payment system or customer loyalty program. Hence users are assumed to be individuals who use their smartphones to manage their wallets. In [Nag+17] the user side has been implemented on a OnePlus 3 smartphone. It features a Snapdragon 820 Quad-Core processor (2 × 2.15 GHz & 2 × 1.6 GHz), 6 GB RAM and runs Android OS v7.1.1 (Nougat).

For the feature-complete scheme in the ETC setting [Nag+20] the user side corresponds to vehicles. The user side has been measured on an evaluation board that features an i.MX6 Dual-Core processor running at 800 MHz with 1 GB DDR3 RAM and 4 GB eMMC Flash. The processor runs an embedded Linux, is ARM Cortex-A9 based (32-bit), and also exists in a more powerful Quad-Core variant. The same processor is used in real vehicles as part of the Savari MobiWAVE-1000 on-board unit [Sav17]. For the PoS hardware, which corresponds to toll gantries, we take the ECONOLITE Connected Vehicle Coprocessor Module as a reference system, which was specifically designed to enable third-party-developed or processor-intensive applications [ECO18], and measured on comparable hardware.

## **9.2 Parameter Choice and Instantiation of Setup Assumptions**

As for the bilinear group setting, we use the Barreto-Naehrig curves Fp254BNb and Fp254n2BNb [BN06; Kaw+16] and the optimal Ate pairing since this choice results in the shortest execution times [Moo+15]. This yields a security level of about 100 bit [BD17].

In [Nag+20] the scheme is evaluated for two sizes of attribute vectors: |U| = |P| = 1 and |U| = |P| = 4. With curves of 254-bit order, each vector component can encode up to 253 bits of arbitrary information. In practice, it should be possible to encode multiple attributes into one such component.
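As an illustration of such packing, the following sketch (Python; the chosen attribute widths are hypothetical examples) concatenates several short attributes bit-wise into a single component of at most 253 bits.

```python
# Minimal sketch of packing several short attributes into one 253-bit vector
# component, as suggested above (attribute widths are hypothetical).
FIELD_BITS = 253          # usable bits per component for a 254-bit group order

def pack(attrs: list[tuple[int, int]]) -> int:
    """attrs: list of (value, bit_width); returns one packed component."""
    assert sum(w for _, w in attrs) <= FIELD_BITS
    packed, shift = 0, 0
    for value, width in attrs:
        assert 0 <= value < (1 << width)
        packed |= value << shift
        shift += width
    return packed

def unpack(packed: int, widths: list[int]) -> list[int]:
    out = []
    for width in widths:
        out.append(packed & ((1 << width) - 1))
        packed >>= width
    return out

# e.g. a 32-bit category, a 64-bit expiry timestamp and a 16-bit region code
component = pack([(7, 32), (1_700_000_000, 64), (42, 16)])
assert unpack(component, [32, 64, 16]) == [7, 1_700_000_000, 42]
```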

The secure messaging functionality of Fmsg to securely exchange protocol messages has been realized by the IND-CCA-secure encryption scheme from [CKS08] in combination with AES-CBC and HMAC-SHA256. The remaining two setup assumptions FCRS and Fbb have not been implemented as independent components, but hard-coded. This is reasonable for the CRS which becomes a fixed system parameter after it has been generated trustworthily and standardized once. Using a static list of keys for Fbb is viable for a testbed, but has obviously to be replaced by a key registration service in reality. Please note, that the latter has no impact on the runtime measurements which are considered here. The remaining building blocks have been instantiated as in Section 6.2.

## **9.3 Tool Chain, Libraries and Optimizations**

The scheme is implemented in C++17 using the RELIC toolkit v0.4.1, an open source cryptography and arithmetic library written in C with support for pairing-friendly elliptic curves [AG16]. We developed our own library for Groth-Sahai NIZK proofs [EG14; GS08] and employed the method in [CCs08] to realize the range proofs. In order to utilize the capabilities of our hardware, the user-side algorithms were optimized for two CPU cores. We also optimized the computations performed by the operator/PoS, taking advantage of the four CPU cores and the batching techniques for Groth-Sahai verification by Herold et al. [Her+17].


Runtime is averaged over 1 000 executions. Transmitted data is rounded up to full bytes.

Table 9.1: Performance results of [Nag+20]

## **9.4 Implementation Results**

In this thesis we only reconsider the most important results and concentrate on the main tasks which include the expensive NIZKs. Table 9.1 shows the results of our measurements for the feature-complete scheme in the ETC setting in terms of execution time and transmitted data.

The performance of the task IssueWallet is dominated by the key escrow mechanism, which requires splitting the secret wallet ID into short digits and proving the correctness of this decomposition. This has a major impact on the zero-knowledge proof both in terms of runtime and size. This is also reflected by the fact that the number of attributes has only a slight effect. At first glance, a runtime of roughly 35 seconds appears impractical. However, the task is not time-critical and only needs to be executed once per wallet.

Also, the task Deposit is dictated by the zero-knowledge proof. Without any further optimizations the task would require (2 749 + 348 =) 3 097ms at the user side and 475ms at the PoS side or 3 572ms in total. Clearly, this is too long for practical use. Fortunately, all parts of the expensive NIZK proof which are independent of the challenge value <sup>2</sup> can be precomputed. This includes all but one equation of the zero-knowledge language. Also assembling the updated wallet after the last message has been exchanged can be moved to a post-processing phase. This way the computation time at the user side can be reduced to 348ms between the first and the last message. When caching valid PoS certificates, the runtime can be further reduced to approximately 40 ms. The computation time at the PoS is dominated by the verification of the NIZK and thus cannot be outsourced. In summary, all computations in the online phase of Deposit can be performed in about 515ms.
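For convenience, the timing budget just described can be restated as simple arithmetic (Python; the 823 ms intermediate value is derived here and not quoted from [Nag+20]):

```python
# Quick restatement of the Deposit timing figures quoted above (milliseconds).
user_full = 2_749 + 348      # user side without any optimization
user_precomp = 348           # user side with precomputed NIZK parts
user_cached = 40             # ... plus cached PoS certificates
pos_verify = 475             # PoS side (NIZK verification, not outsourceable)

print(user_full + pos_verify)     # 3572 ms in total without optimization
print(user_precomp + pos_verify)  # 823 ms online (derived intermediate value)
print(user_cached + pos_verify)   # 515 ms online with all optimizations
```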


Table 9.2: Performance results of [Nag+17]

Table 9.2 shows the performance results of BBA+ from [Her+17]. First note that BBA+ [Nag+17] has no support for user attribute vectors and also does not use PoS certificates. This means the results need to be contrasted with the results for one attribute and cached certificates in Table 9.1.

The task IssueWallet is faster by orders of magnitude, because BBA+ has no blacklisting mechanism and thus does not need to perform a costly range proof to escrow the wallet ID. This is reflected in runtime and message size. The combined runtime for the user and operator is 218 ms vs. 35 s and the combined message size is 1 kB vs. 89 kB.

The performance of the online phase of Disburse for BBA+ [Her+17] is approximately in the same range as for the ETC system in [Nag+20]. The computation time is 76 ms vs. 41 ms at the user side and 436 ms vs. 475 ms at the PoS side. The difference at the user side is due to the use of different hardware, i.e. a smartphone vs. an OBU evaluation board. The amount of transferred data in [Her+17] is approximately half of the amount in [Nag+20]. Remember that [Her+17] is missing some features. Hence, no prove-participation tag is sent and the NIZK is smaller, because fraud-detection IDs are not images of a PRF but randomly drawn and the NIZK lacks the attribute vectors of the user and the previous PoS.

Also, Herold et al. show performance results for a different variant of the task Disburse. In our running prime example, Disburse simply unveils the current balance of the wallet. This also matches the scenario in [Nag+20]. In this case the performance of Disburse is approximately the same as for Deposit. Alternatively, Disburse may use range proofs to show that the wallet contains sufficient funds. Typically, pre-payment scenarios benefit from the higher privacy and are discussed in Section 2.3.2. In this case, the runtime increases by a factor of eight at the user side and nearly doubles at the PoS side. Also, the amount of data which is sent by the user increases by a factor of three.

#### **9.4.1 Storage Requirements**

The storage requirements are of no concern with respect to today's hardware. The wallet itself consumes 1 kB of memory and is fixed in size. During Deposit, the user and the PoS collect data in order to, e.g., prevent double-spending or to prove participation in a protocol run. In Deposit, the user has to store 137 bytes of transaction information and (optionally) 268 bytes to cache the PoS certificate for later re-use. Assuming that users perform 10000 transactions in one billing period, they have to store 1.37 MB of transaction information and, even if all visited PoSes were different, 2.68 MB of cached certificates.
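A quick back-of-the-envelope check of these figures (Python):

```python
# Back-of-the-envelope check of the user storage figures quoted above.
tx_info, cert = 137, 268                 # bytes per Deposit run
runs = 10_000                            # assumed transactions per billing period
print(tx_info * runs / 1e6, "MB")        # 1.37 MB of transaction information
print(cert * runs / 1e6, "MB")           # 2.68 MB of cached certificates (worst case)
```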

A PoS has to store 246 bytes of transaction information after each run of Deposit for the double-spending tags, prove-participation tags and recalculation tags. All this information is eventually aggregated at the operator's database. Even for large scale deployments with hundreds of millions of transactions per month, the resulting database would only consume a few gigabytes.

#### **9.4.2 Computing DLOGs**

To blacklist a user, the dispute resolver has to compute a number of discrete logarithms to recover the wallet ID $\lambda$. With our choice of parameters, $\lambda$ is split into 32-bit values, thus resulting in the computation of eight 32-bit DLOGs. While DLOGs of this size can be brute-forced naively, the technique of Bernstein et al. [BL12] can be used to speed up this process. Using their algorithm, computing a discrete logarithm in an interval of order 2³² takes around 1.5 seconds on a single core of a standard desktop using a 55 kB table of precomputed elements. These precomputations need to be done only *once* by the dispute resolver when setting up the system and take one hour on a desktop computer. Thus, the required DLOGs can be computed in reasonable time by the dispute resolver.
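The following toy sketch (Python) illustrates why 32-bit DLOGs are cheap once a table has been precomputed. It uses textbook baby-step giant-step over a toy multiplicative group modulo a prime; it is not the interval algorithm of Bernstein and Lange [BL12] and does not operate in the elliptic-curve groups used by P5C.

```python
# Toy baby-step giant-step sketch for a short (32-bit) discrete logarithm.
def dlog_bsgs(g: int, h: int, p: int, bound: int) -> int:
    """Find some x in [0, bound) with g^x = h (mod p)."""
    m = int(bound ** 0.5) + 1
    baby, cur = {}, 1
    for j in range(m):                      # precomputed table of baby steps g^j
        baby.setdefault(cur, j)
        cur = cur * g % p
    giant = pow(g, -m, p)                   # multiply by g^(-m) per giant step
    gamma = h
    for i in range(m + 1):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    raise ValueError("no logarithm found in the given interval")

p = 2**61 - 1                               # toy prime modulus (Mersenne prime)
g, x = 3, 0xDEADBEEF                        # a 32-bit exponent to recover
h = pow(g, x, p)
assert pow(g, dlog_bsgs(g, h, p, 2**32), p) == h
```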

# **10 Summary, Open Problems and Future Work**

The final chapter of this thesis serves two purposes. First, some improvements of the definition of Fapc and the scheme P5C are discussed. Section 10.1 deals with smaller improvements. These improvements are rather straight-forward and have been discovered during the writing of this thesis, but have not found their way into the final version. Section 10.2 discusses what has to be changed to enable simulation-based security not only under restricted corruption sets but also under arbitrary corruption of the parties. This change comes at the cost of less efficiency. Finally, Section 10.3 concludes the thesis and gives some pointers to future work beyond the rather simple improvements which have already been discussed.

## **10.1 Minor Improvements**

The following three minor improvements are not expected to cause any problems or provide any new insights. The first one only affects a seemingly inconsequential design decision that has been (badly) determined in a very early stage and pervades the whole system. Hence, it has turned out not to be fixable without much labor in exchange for very little benefit. The other two improvements have been unveiled when the synchronization of the transaction tags has been formalized explicitly.

#### **10.1.1 Wallet Handles**

The first improvement removes the serial number from all input/output in all tasks. Instead, a newly introduced *wallet handle*, which must not be confused with the (secret) wallet ID, is used where needed. The serial number is given as output to the user only to enable several wallets per user and to provide the user with an option to denote which wallet should be used in a particular task. But the serial number actually conveys too much. It does not only denote a wallet but a whole wallet state and thus also empowers formally honest users to commit double-spending. This leads to the awkward distinction between honest and well-behaving users on the one hand and honest but misbehaving users on the other hand.

Although outputting the (secret) wallet ID to the user seems to be what is wanted, this does not work out, because it would allow the environment to evaluate the PRF and check for real fraud-detection IDs vs. ideal fraud-detection IDs. Ultimately, this is the same problem as in the formalization of symmetric encryption in UC. There, the (secret) encryption key must not be output to the environment, as the environment could distinguish encrypted messages from random simulations, but still an option to select which key shall be used is required. The solution is to introduce wallet handles which are truly random numbers and map one-to-one and onto the underlying wallet ID, but are information-theoretically independent of the fraud-detection IDs. More precisely, at the end of IssueWallet the user (and only the user, not the operator) obtains a wallet handle that is internally drawn by Fapc and mapped to the wallet ID. The user re-inputs the wallet handle into Deposit and Disburse. Internally, Fapc looks up the latest state of the corresponding wallet. This way honest users are also unable to commit double-spending. Double-spending is still possible, but the user must be formally corrupted first. For the latter, Fapc asks the adversary to provide the serial number of an alternative wallet state, if the user is corrupted. Hence, the distinction between well-behaving and misbehaving (honest) users can be dropped.
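A minimal sketch of the proposed bookkeeping inside Fapc (Python; class and method names are illustrative assumptions, not part of the formal definition):

```python
# Sketch of the proposed wallet handles inside F_apc: the user only ever sees a
# random handle, F_apc maps it to the wallet ID and always resolves the latest
# state, so an honest user cannot replay an old serial number.
import secrets

class WalletHandles:
    def __init__(self):
        self._by_handle = {}          # handle -> wallet ID
        self._latest_state = {}       # wallet ID -> latest serial number

    def issue(self, wallet_id: bytes, serial: bytes) -> bytes:
        handle = secrets.token_bytes(16)      # independent of fraud-detection IDs
        self._by_handle[handle] = wallet_id
        self._latest_state[wallet_id] = serial
        return handle                         # output to the user only

    def resolve(self, handle: bytes) -> tuple[bytes, bytes]:
        wallet_id = self._by_handle[handle]   # raises if the handle is unknown
        return wallet_id, self._latest_state[wallet_id]

    def advance(self, handle: bytes, new_serial: bytes) -> None:
        self._latest_state[self._by_handle[handle]] = new_serial
```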

The introduction of wallet handles also makes it possible to get rid of some inelegant leakage. In IssueWallet and Deposit, Fapc explicitly leaks the serial number to the adversary. Although this is not a serious problem, because the serial number is a random number, the only reason for the leakage is to enable the simulation of a Blum cointoss that is consistent with the later output. If the serial number is not output, the Blum cointoss can be simulated blindly. This also applies to some other tasks.
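
For reference, a hash-based sketch of a Blum cointoss for jointly drawing a serial number follows; in P5C the commitment scheme C4 plays this role, so the hash commitment below is only an illustrative stand-in.

```python
# Sketch of a Blum cointoss for jointly drawing a random serial number.
# P5C uses the commitment scheme C4 here; the hash-based commitment is only a
# stand-in. If the serial number is never output by F_apc, a simulator may
# replace com_a by a random commitment and simulate the cointoss "blindly".
import secrets, hashlib

def commit(value: bytes):
    r = secrets.token_bytes(32)
    return hashlib.sha256(r + value).digest(), r              # (commitment, opening)

def blum_cointoss(num_bytes=32):
    share_a = secrets.token_bytes(num_bytes)                   # party A's share
    com_a, open_a = commit(share_a)                            # A sends com_a first
    share_b = secrets.token_bytes(num_bytes)                   # B answers with its share
    assert hashlib.sha256(open_a + share_a).digest() == com_a  # B checks A's opening
    return bytes(x ^ y for x, y in zip(share_a, share_b))      # joint serial number

serial_number = blum_cointoss()
```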

All in all, this modification would lead to a cleaner, more concise and more "semantic" interface. However, the change is not only a cosmetic one. The observation of the previous paragraph with respect to the Blum cointoss is also a key element for the extension to full-fledged security under arbitrary corruption (cp. Section 10.2).

#### **10.1.2 Recalculation Tags**

The next improvement affects the recalculation tags rc. As stated in Sections 4.4.3 and 7.4.3, only very few guarantees are provided. The operator must rely on the PoSes to provide correct and complete sets of recalculation tags. Although the hidden recalculation tag rc ≔ (, , , *pk*rc P, rc) is signed by the PoS, this does not ensure that the signed price is actually the same price as used in the transaction. Also, corrupted PoSes can create additional recalculation tags or drop them.

As an intermediate step, the hidden recalculation tag rc could be sent to the user. This way, the PoS is deterred from dropping a recalculation tag, because the user owns a copy which is validly signed by the PoS. A corrupted PoS can still put a different price into its copy of the recalculation tag, but the user can check this and immediately file a claim out-of-band. This intermediate step would likely increase the security level in a "practical" sense, but it cannot be formally captured by the model, and a corrupted PoS can still inject additional recalculation tags.

A more comprehensive solution would also make the user sign the recalculation tag. This way, the PoS can neither unilaterally alter the price later nor create fake tags. However, this solution comes with two obstacles.

A straightforward signature rc U by the user contradicts the user's privacy in Deposit, as the PoS somehow needs to check its validity. Instead, the user is equipped with a signing key pair (*pk*rc U, *sk*rc U) whose public part *pk*rc U needs to be certified, i.e. signed by the operator, similar to what is done in CertifyPOS for PoSes. This could either be part of IssueWallet or an independent task. Under the assumption that the signing scheme has the non-standard, but quite natural, security property that a pure signature rc U does not unveil anything about the public key *pk*rc U under which it is valid, the following approach is possible: The user signs the recalculation tag and sends the signature rc U together with a NIZK proof rc that the signature is valid under an (anonymous) user key which in turn is validly signed by the operator. The hidden recalculation tag is then extended to rc ≔ (, , , *pk*rc P, rc, rc P, rc U) with rc P = rc denoting the PoS signature as before. Note that this is closely related to the concepts of group or ring signatures. If the signature scheme unveils the public key for which it is valid, then the signature rc U can additionally be encapsulated in a commitment. We stress that it does not suffice if the participating PoS is convinced that the user has signed the recalculation tag; the operator, who collects all recalculation tags later, needs to be convinced, too.

Unfortunately, this comprehensive solution does not only increase the computational complexity of Deposit but also requires an additional round of communication. The user can only create rc U after having learned the price. Currently, Deposit sends the price as part of the last (i.e. third) message from the PoS to the user (cp. Fig. 7.20). This message also contains the updatable commitment upd and the associated signature upd. These components must remain in the last message, as otherwise a malicious user could run away with a new wallet before Deposit is completed. Hence, it is not admissible to simply append one additional message in which the user sends (rc, rc U) at the end; instead, the current third message becomes the fifth message, the price is pushed forward to a newly added third message, and the user sends (rc, rc U) in the newly added fourth message.
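
The following sketch summarizes the proposed re-ordering; only the items explicitly mentioned above are listed, and everything else about the messages (including the first two) is elided.

```python
# Sketch of the proposed re-ordering of the Deposit message flow. Only items
# mentioned in the text are listed; all other message contents are elided.
OLD_DEPOSIT_FLOW = {
    3: ("PoS -> user", ["price", "updatable commitment", "signature on it"]),  # last message
}
NEW_DEPOSIT_FLOW = {
    3: ("PoS -> user", ["price"]),                                    # price pushed forward
    4: ("user -> PoS", ["recalculation tag", "user signature"]),      # newly added
    5: ("PoS -> user", ["updatable commitment", "signature on it"]),  # must remain last
}
```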

Finally, the task RecalculateBalance needs to be extended into a two-party task involving the user. The user and the operator both input their sets of recalculation tags and the two sets are united. This way, neither side can drop a recalculation tag. In case the user does not agree to participate, RecalculateBalance can still be run by the operator alone, using the empty set ∅ as the user's input.
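
A minimal sketch of the extended two-party task, under the assumption that recalculation tags are represented as hashable values; the further processing of the merged set is unchanged and therefore elided.

```python
# Sketch of the extended two-party RecalculateBalance: operator and user each
# input their set of recalculation tags, so neither side can drop a tag.
def recalculate_balance(operator_tags: set, user_tags: frozenset = frozenset()):
    # If the user does not agree to participate, user_tags stays the empty set.
    merged = operator_tags | set(user_tags)
    # ... the merged set is then processed exactly as in the single-party task.
    return merged
```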

Skipping ahead, merging the improved recalculation tag with the improved prove-participation tag (cp. next section) into a joint *receipt tag* seems appealing, because both share overlapping information and would thereby model a true digital counterpart of a physical invoice. However, this is only a syntactical embellishment.

#### **10.1.3 Prove-Participation Tags**

The prove-participation tags exhibit a practical problem that is very similar to the one of the recalculation tags discussed above. The PoS which has triggered the violation enforcer to physically identify a user is the same PoS which also provides the set of prove-participation tags pp to the violation enforcer. This allows a PoS to intentionally embezzle the relevant prove-participation tags which are connected to the incident and thereby prevent the user from refuting the accusation (cp. Sections 4.4.4 and 7.4.4). Note that the hidden prove-participation tag pp ≔ (*pk*pp P, pp, *pk*<sup>U</sup>) already contains a signature by the PoS on the prove-participation tag pp ≔ *pk*<sup>U</sup>. At least this allows users to prove that they have participated in *some* transaction with the accusing PoS, but it does not prove that this was the specific transaction under investigation.

In a former approach, the serial number was part of pp, but it is as random as the commitment value *pk*<sup>U</sup>, does not establish a connection to the transaction, and thus does not solve the problem. To solve the problem once and for all, the violation enforcer needs to be able to *autonomously* relate the physical identification (e.g. a photo) to some information about the transaction, without relying on the PoS to assist with this mapping honestly.

In practice, the solution is quite easy. An improved prove-participation tag would encode the actual circumstances of the transaction, including a location, which is already given by the PoS' identity and position, and a timestamp. This timestamp would also be included in the photo and thus could be matched. A timestamp is an example of "publicly verifiable information" (cp. Section 2.4). Each party has its own time source which it trusts. Depending on the scenario and the frequency with which a user participates in transactions with the same PoS, the timestamps only need to match approximately. A practical solution would be as follows: The user commits to its public key *pk*<sup>U</sup> as before and sends *pk*<sup>U</sup> to the PoS together with its own timestamp *ts*U. If *ts*<sup>U</sup> is sufficiently accurate, i.e. within a specified distance from *ts*P, the PoS signs the tuple (*pk*<sup>U</sup>, *ts*U) and sends pp back to the user. If anything fails, the PoS triggers the violation enforcer (as before), which takes a photo and equips it with its own timestamp *ts*VE. Later, the physically identified user is challenged on (*pid*<sup>P</sup>, *ts*VE). If the user can present a prove-participation tag which is signed by the correct PoS, has a timestamp *ts*<sup>U</sup> close to *ts*VE and can be unveiled to the user's own public key *pk*<sup>U</sup>, then the user is discharged. Note that this way the violation enforcer does not need any input from the PoS.
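
The following is a sketch of the checks the violation enforcer could perform on such an improved prove-participation tag. The helper functions, the tag's field names and the tolerance MAX_SKEW are placeholder assumptions; only the three checks themselves are taken from the description above.

```python
# Sketch of the violation enforcer's check of an improved prove-participation tag.
# verify_sig, open_commitment, the tag fields and MAX_SKEW are placeholders; the
# logic follows the text: a valid signature by the accusing PoS, a timestamp close
# to ts_VE, and a commitment that opens to the user's own public key pk_U.

MAX_SKEW = 120  # seconds; the admissible distance is scenario-dependent (assumption)

def discharge_user(tag, ts_VE, pk_U, pos_verification_key, verify_sig, open_commitment):
    signed_ok = verify_sig(pos_verification_key,
                           message=(tag["commitment"], tag["ts_U"]),
                           signature=tag["pos_signature"])
    ts_ok = abs(tag["ts_U"] - ts_VE) <= MAX_SKEW
    key_ok = open_commitment(tag["commitment"], tag["opening"]) == pk_U
    # None of these checks requires any input from the PoS.
    return signed_ok and ts_ok and key_ok
```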

Unfortunately, this apparent solution cannot easily be modeled in the UC framework, although the basic idea of Fts seems easy. The (FCRS, Fbb, Fmsg)-hybrid protocol P5C is augmented by an ideal timestamping functionality Fts. Fts is an n-ary functionality (for arbitrary n) that upon request outputs the same timestamp to all parties. Within the scope of the (abstract) model, a simple integer that is increased for each request suffices as a timestamp. As each of the parties per request gets the identical timestamp, we do not need to deal with (real-world) inaccuracies either. However, the formalization of Fts is more involved than it may seem. UC is inherently asynchronous and message-driven, i.e. to be accurate, Fts cannot output to all parties at once, but loses its activation after each output to a single party. Moreover, the adversary is not obliged to re-schedule Fts right away, but may let other parties run first. This also leads to the problem that Fts cannot know whether the first n parties which request a timestamp actually belong to the same transaction and thus should receive the same timestamp, or whether these parties belong to different transactions and therefore some of them should receive a different timestamp. These problems can be overcome ([Kat+13]), but the formalization is surprisingly intricate.
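
For illustration only, a naive synchronous sketch of Fts as a counter follows; it deliberately glosses over exactly the activation and scheduling issues just described, which make a faithful UC formalization (cf. [Kat+13]) considerably more involved.

```python
# Naive sketch of the ideal timestamping functionality F_ts: an integer counter
# that hands the same timestamp to every party of one round of requests. This
# ignores the asynchronous, message-driven nature of UC discussed in the text.
class NaiveFts:
    def __init__(self, parties_per_round: int):
        self.n = parties_per_round
        self.counter = 0
        self.pending = 0

    def request(self, party_id):
        if self.pending == 0:          # first request of a new round
            self.counter += 1
            self.pending = self.n
        self.pending -= 1
        return self.counter            # identical timestamp for all n parties
```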

As already mentioned in the previous section, the improved prove-participation tags and the improved recalculation tags suggest themselves to be combined into one sort of tag, as they share a lot of identical information after the extension.

## **10.2 Towards Full-Fledged Corruption**

In Chapter 8, P5C is proven to be a secure UC-realization of Fapc for restricted sets of corruption. We observe that full-fledged indistinguishability is possible if an extractable commitment scheme is used.

Let us temporarily ignore the commitment scheme C4 for the serial number in IssueWallet and the Blum cointoss, as well as the commitment scheme C2 used by the prove-participation tags to hide the user's public key. We first concentrate on the commitment scheme C1, which is used to create the fixed and updatable commitments fix, upd of a wallet, and the NIZK proofs P1, P2 and P3 in IssueWallet, Deposit and Disburse, respectively.

A close look at the simulators for operator and user security, especially at how they set up the CRS (cp. Figs. 8.2 and 8.19), shows that


With respect to the second item, we note that C1 can be set up for equivocation, but this property is not needed in the proof. With respect to the last item, we stress that fix and upd are neither unveiled in IssueWallet nor in Deposit, but only homomorphically modified. Although the balance is unveiled in Deposit, the commitment upd itself is not opened but only unveiled indirectly by means of a NIZK proof which shows that the user knows consistent opening information. These two observations, in combination with the third item from the list, allow the following solution under the assumption that C1 is replaced by an extractable scheme.

For a full-fledged corruption model, the simulator sets up the NIZK schemes P1, P2 and P3 for simulation and the commitment scheme C1 for extraction. When the simulator needs to simulate a message of an honest user towards a corrupted PoS/operator, it commits to some random value (which never needs to be opened) and simulates a proof exactly the same way as is currently done in the case of user security. If the simulator plays an honest PoS (or operator) in an interaction with a corrupted user, the simulator extracts the user's secrets from the commitments (instead of from the proofs, as is done now in the case of operator security) and inputs the extracted values into Fapc.
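
A high-level sketch of this case split follows; all helper functions are placeholders for the actual machinery of Chapter 8, and only the dispatch between simulation and extraction is taken from the strategy described above.

```python
# High-level sketch of the modified simulation strategy under arbitrary corruption.
# commit_random, simulate_nizk, extract_from_commitment and feed_into_F_apc are
# placeholders; only the case distinction reflects the described proof strategy.
def simulate(plays_honest_user, commit_random, simulate_nizk,
             extract_from_commitment, feed_into_F_apc, received_commitment=None):
    if plays_honest_user:
        # Honest user towards corrupted PoS/operator: commit to a random value
        # (never opened) and simulate the NIZK proof, as in the user-security proof.
        com = commit_random()
        return com, simulate_nizk(com)
    else:
        # Honest PoS/operator towards corrupted user: C1 is set up for extraction,
        # so the user's secrets are extracted from the commitment (not the proof)
        # and fed into F_apc.
        user_secrets = extract_from_commitment(received_commitment)
        feed_into_F_apc(user_secrets)
        return user_secrets
```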

We stress that this modified proof strategy does not require any modifications on the protocol level. But it rules out shrinking commitments, because these go along with an information-theoretic loss and thus cannot be extractable. Hence, full UC security can be traded for a little less efficiency.

We now consider the commitment scheme C2 for the prove-participation tags. Here, the same trick of an indirect opening can be applied. To this end, only the realization of the task ProveParticipation needs to be modified. The user does not unveil *pk*<sup>U</sup> directly and thereby show that it contains *pk*<sup>U</sup>, but instead sends a NIZK proof to the violation enforcer and thereby demonstrates that *pk*<sup>U</sup> could correctly be unveiled. Note that this already happens in the context of Deposit, when the (anonymous) user proves to the PoS that *pk*<sup>U</sup> contains the correct value. The modification of ProveParticipation is cheap, as the proof is small and ProveParticipation is not time-critical. Then C2 can be put into extraction mode for the security proof. For corrupted users, the used *pk*<sup>U</sup> is extracted from *pk*<sup>U</sup> in the scope of Deposit, while for honest users the NIZK proof is simulated in the scope of ProveParticipation.

Lastly, we need to deal with the commitment scheme C4, which is used in IssueWallet and Deposit to jointly draw a random serial number by means of a Blum cointoss. At the moment, C4 is either set up for equivocation (in case of operator security) or for extraction (in case of user security) to consistently simulate the cointoss for either side. Surely, the same trick could be applied again: instead of opening the commitment ″ ser to the share ″, the operator could send a NIZK proof and show an indirect opening. However, as opposed to ProveParticipation, this is computationally prohibitive, because Deposit is a time-critical task. But a much easier solution is possible. If the interface of Fapc is changed as outlined in Section 10.1.1 such that the serial number is removed from the input/output, then the necessity to simulate a consistent Blum cointoss is remedied. Instead, a random commitment can be used to simulate the cointoss blindly.

In summary, full simulatability under arbitrary corruption is possible in exchange for a different (less efficient, non-shrinking, extractable) instantiation of C1 and a minor modification of ProveParticipation.

**An Open Problem** The unmodified proof as presented in Chapter 8, and especially in Section 8.4, uses the NIZK scheme to assert that corrupted users indeed know their committed secrets, i.e. the NIZK proofs are proofs of knowledge. The proposed modification moves this attestation from the NIZK proofs to the commitments and thus allows one to prove full-fledged security using a different proof strategy. However, the modification does not affect the NIZK scheme at all. In particular, an adversary does not gain any further capabilities for creating NIZK proofs and cannot (and must not) know whether the NIZK scheme is running in extraction mode or not. Hence, from the adversary's perspective it does not make any difference whether the true value is extracted from the NIZK proof (given that the CRS is set up this way) or whether the true value is extracted from the commitment scheme.

Let us express the idea differently: In theory, a non-extractable (and shrinking) commitment scheme might quash security, because the adversary might find a way to send commitments whose message is not known to the adversary itself at the time when the commitment is sent; e.g., the adversary could try to blindly forward a commitment from another message and get away with it. However, this kind of attack is ruled out by the zero-knowledge proof of knowledge. Hence, as long as the NIZK scheme has the knowledge property, switching between a non-extractable (and shrinking) and an extractable commitment scheme does not open up room for attacks. After all, the adversary does not know whether the CRS of the NIZK is set up for extraction.

This leads to the conjecture that the inability to prove the unmodified protocol P5C secure under arbitrary corruption is not a real insecurity of the scheme, but a formal problem of the proof strategy. Hence, it is tempting to assume that P5C is also secure under arbitrary corruption using shrinking (non-extractable) commitments. Finding an adequate proof strategy seems to be an interesting open problem.

## **10.3 Summary and Future Work**

In this thesis we have formalized the concept of anonymous point collection as an abstract building block. The proposed definition not only heavily extends the functional requirements of such a building block over previous approaches and thereby widens the practical applicability, but is also—to the best of our knowledge—the first one that provides a comprehensive definition as an ideal functionality in the UC framework and thereby treats correctness, security and privacy in an integrated way. A realization has been constructed (in pseudo-code) and formally proven secure. Again, to the best of our knowledge, the rigorous and thorough security analysis of our building block is the first one in its area, i.e. among comparable proposed building blocks which target similar scenarios. Along the way, a lot of technical subtleties had to be considered in order to eventually find a definition of security that, on the one hand, is not overly idealized and thus unrealizable, but, on the other hand, still captures a meaningful notion of security and is not too weak, while allowing for a practically efficient realization at the same time. The resulting building block is the first one that


Moreover, the proposed construction has actually been implemented on real-world hardware to document its efficiency for practical deployment. Here, a challenging task has been to select the right set of instantiations of the building blocks, which could be fine-tuned to interplay nicely with each other. The last two points have to be entirely credited to the author's co-workers.

However, this thesis' contribution should not only be perceived as an improved definition and construction of an abstract building block for a specific purpose; this thesis also demonstrates that the UC framework is the "right" method to formalize the security and privacy of complex systems. This thesis' genesis is perfect evidence: In [Nag+17], operator security as well as user security and privacy are treated as separate problems. Operator security is formalized under the game-based paradigm using a list of desired properties and an individual game per property. User security is already formalized under the simulation-based paradigm, but in an ad-hoc model rather than a rigorously defined model such as the UC framework. In particular, this ad-hoc model is not very precise on how the simulator knows which user needs to be simulated; the simulator simply "does the right thing". In [Nag+20] the system is modeled in the UC framework, but the synchronization of the distributed state is ignored. Instead, Nagel et al. [Nag+20] vaguely state that the tags "somehow" roam from one party to the other. Both transitions, from [Nag+17] to [Nag+20] and from [Nag+20] to this thesis, have unveiled flaws in the previous attempt which turned out to be oversights that would have allowed for real-world attacks.

This thesis has also shown how the classical game-based approach that uses a list of individual objectives can be combined with the simulation-based paradigm. Surely, a list of individual security properties (as in the game-based approach) tends to be more appealing, as each of the security games is usually connected to a desired objective, while an ideal functionality (for a complex system) tends to deprive itself of an immediate interpretation.¹ But the game-based approach has the inherent problem that it remains unclear when the list of properties is complete. In other words, each of the security games rules out a certain attack (e.g. claiming a wrong balance), but there is no guarantee what else may go wrong beyond that list. In contrast, the simulation-based approach is very good at making explicit what cannot be achieved. Starting with an overly ideal functionality, more and more "backdoors" for the simulator are incorporated until it becomes realizable. For each backdoor, there must either be a justification why it is inherent to the problem and thus cannot be avoided, or a better realization must be contrived. This thesis gives an example of how both methodologies can be combined for a complex system: At first, a list of desired objectives is compiled, but then an ideal functionality is defined. Instead of showing that a particular realization fulfills each objective by means of an individual security game, one shows that the ideal functionality fulfills the objective, as done in this thesis in Sections 5.1 and 5.2.

The same approach also lends itself to a privacy analysis of a complex system, as shown in Section 5.3. Instead of analyzing privacy for a concrete (cryptographic) realization and a concrete dataset (for a particular deployment), privacy should be analyzed using the ideal model. The ideal functionality abstracts away the cryptographic complexity and "pulls it out of the equation".

Although this thesis has (hopefully) contributed to the question of how the security of complex systems can be captured, it has unveiled two problems which we deem interesting for further (long-term) research. Anonymous point collection might be a complex system from the usual cryptographic perspective, compared to much simpler primitives like encryption, signatures, commitments and so on, but it is by far not a complex system from the perspective of IT security (or even general software engineering), which deals with much larger systems. Nonetheless, already for such medium-sized systems UC proofs become cumbersome and tedious, which might also explain why rigorous formal treatment at this level of granularity is less common in the IT security domain. In the author's personal opinion, two problems need to be overcome to remedy this issue: (1) The UC framework needs to be relaxed (or extended) such that modular proofs are not only a theoretical promise, but actually possible in a way that reflects

¹ Indeed, one of the (anonymous) reviewers of [Nag+20] declared to feel more confident about the security of the scheme if there were individual security games instead of a single ideal functionality, arguing that the functionality is a complex protocol in its own right and that it is hard to tell what security it provides.

the way existing building blocks are combined in practice (cp. Section 5.4.3). (2) We require tools that allow for computer-aided, automatic proofs of indistinguishability between ideal functionalities and their realizations. Automated reasoning about security properties has gained much attention in the IT security field. However, existing tools (e.g. ProVerif, CryptoVerif, etc.) are usually very good at showing that, given a certain set of pre-conditions, the execution of some code fulfills some post-conditions, and are thus very close to the game-based approach [Bla+18].

## **Notation**





# **Bibliography**


[ven19] ventopay GmbH. *ventopay Customized Payment Systems*. 2019. URL: https://ventopay.com/.

# **List of Tables**


# **List of Figures**




## **List of Theorems**




## **Own Publications**


In numerous user-centric, cyber-physical systems, point collection and redemption mechanisms are a core component. Loosely speaking, this component may be viewed as a personal "piggy bank" that allows users to deposit and disburse points. Depending on the context, points might be interpreted in numerous ways: monetary units, loyalty rating points, reliability credits, etc.

Applications which are currently deployed in practice do not provide anonymity for the users. In the literature, several privacy-preserving solutions have been proposed. However, these proposals typically target specific scenarios, but do not consider anonymous point collection as a generic, multi-purpose building block.

This work is a comprehensive, formal treatment of anonymous point collection. The proposed definition does not only provide a strong notion of security and privacy, but also covers features which are important for practical use. An efficient realization is presented and proven to fulfill the proposed definition. The resulting building block is the first one that allows for anonymous two-way transactions, has semi-offline capabilities, yields constant storage size, and is provably secure.
