
Private Set Intersection from Homomorphic Encryption: A Python Implementation

Check out our Private Set Intersection (PSI) implementation in Python here!

In this blog post, we will first motivate our interest in PSI by providing a list of applications: password checkup, private contact discovery for WhatsApp or Signal, privately measuring ads efficiency, or DNA pattern matching. Secondly, we will show how to build a PSI protocol from a homomorphic encryption scheme. Thirdly, we will describe our Python implementation of a specific PSI protocol.

Our implementation is based on the protocol described in this paper and its follow-up. This protocol uses Homomorphic Encryption (HE), a powerful cryptographic primitive which allows performing computations on encrypted data in such a way that only the secret key holder has access to the decrypted result of these computations. If you are curious about HE, check out our previous blog post! Our implementation uses the Brakerski-Fan-Vercauteren (BFV) HE scheme from the TenSEAL library. You can also check out a concurrent SEAL-based C++ implementation of the same protocol, recently published by Microsoft.

Disclaimer: Our implementation is not meant for production use. Use it at your own risk.

1. Private Set Intersection (PSI)

Private Set Intersection (PSI) is an interactive protocol between a client and a server. The client holds a set of items $Y$ and the server holds a set of items $X$. By the end of the protocol, the client learns the intersection of $X$ and $Y$ and nothing else about the server's set, while the server learns nothing about the client's data.

Motivation

Imagine that a WhatsApp client would like to check who among his phone contacts also uses WhatsApp. To get this information, he could send his entire contact list to the WhatsApp server, and the server could answer back with the list of his contacts who use the app. In this way, the server learns the client's list of contacts ⚠️. The client may not be happy about this because he values his privacy. Ideally, a client would like to privately learn who among his contacts uses the app, without revealing his entire list to the WhatsApp server. This is where Private Set Intersection comes to the rescue.

What is PSI?

Suppose Alice has a list of friends A and Bob has a list of friends B. Alice is interested in finding out their mutual friends and Bob is ok to share this information with her. Of course, Alice would easily find this out if Bob gave her his list. Or, Alice could send her list to Bob and Bob could answer her back with the information that she is interested in. But both Alice and Bob value their privacy and neither of them wants to reveal his/her entire list of friends to the other.

PSI is an interactive cryptographic protocol which allows Alice to find out which friends she has in common with Bob, and nothing else about the rest of Bob's friends, while preventing Bob from learning any information about Alice's list of friends. As you can see, a PSI protocol is exactly what we need to solve the WhatsApp problem described in the previous section.

Applications of PSI

PSI may be used in many practical applications:

  1. 🔑 Password Checkup: A client wants to check if his credentials are leaked somewhere on the internet. PSI allows the client to query a database of leaked passwords, without revealing his actual password to the database owner.

  2. 🔬 DNA private matching: A client gets his DNA sequenced and wants to find out if his sequence is related to some genetic disease. PSI allows the client to query a database of disease-linked sequences, without revealing his DNA sequence.

  3. 🚗 Measuring ads efficiency: A car company wants to measure how effective the ads it has paid for are. PSI allows the company to find out who among its clients has seen a specific ad on social media (on Facebook, for instance), while the car company keeps its list of clients private and Facebook also keeps its list of users private.

2. PSI from HE

One of the first PSI protocols in the literature was proposed by Meadows in this paper and uses the Diffie-Hellman key exchange protocol as a building block. As we said, in this blog post we focus on a PSI protocol that uses HE. The HE approach works really well in the asymmetric case (i.e. when the server's set is much larger than the client's set), with especially good results in terms of communication costs. As you can imagine, this is exactly the case we are interested in: WhatsApp's database is much larger than the database of a particular client.

Short recap on HE

Let's recap a bit what HE is, first. As nicely explained here 😉, this cryptographic primitive allows you to compute on encrypted data.

In our implementation we use the BFV HE scheme. This scheme encodes plaintexts as polynomials of degree less than poly_modulus_degree (which is a power of 2) with integer coefficients modulo plain_modulus. Because of the homomorphic properties of the scheme, for any plaintexts $m_1, m_2, m$ and $p$, we have that:

$$\begin{aligned} \mathsf{Enc}(m_1) + \mathsf{Enc}(m_2) &\equiv \mathsf{Enc}(m_1 + m_2)\\ \mathsf{Enc}(m_1) \cdot \mathsf{Enc}(m_2) &\equiv \mathsf{Enc}(m_1 \cdot m_2)\\ p+\mathsf{Enc}(m) &\equiv \mathsf{Enc}(p+ m)\\ p\cdot\mathsf{Enc}(m) &\equiv \mathsf{Enc}(p\cdot m). \end{aligned}$$

Since BFV lets us perform additions and multiplications homomorphically on ciphertexts, we can also evaluate polynomials homomorphically. So cool, no? Still, keep in mind that the parameters of the scheme usually allow performing only a limited number of multiplications! We will get back to this observation soon.

How to get PSI from HE

In the rest of this section, we will show you the intuition behind building a PSI protocol from a HE scheme. We will build this intuition step by step, by starting with very particular cases.

Case a. Alice has just one friend $y$ and Bob also has just one friend $x$ (it's unlikely, but who knows, maybe they are both extremely selective 🧐 when it comes to friendships).

💡 Let's see how they may use the magic of HE: Alice encrypts her $y$ using a HE scheme, gets $\mathsf{Enc}(y)$ and sends it to Bob. Now Bob subtracts his $x$ from $\mathsf{Enc}(y)$ and multiplies the result by a random nonzero integer $r$. Due to the 💪 of HE, the result of Bob's computation is actually an encryption of $r\cdot(y-x)$. He sends $\mathsf{Enc}(r\cdot(y-x))$ back to Alice. Alice now decrypts this element using her secret key and checks whether she gets $0$ or not. If she gets $0$, her friend $y$ is the same as Bob's friend $x$. Otherwise, their friends are different.

❗ For ease of understanding, throughout this section we will assume that both the plaintexts ($x$, $y$, etc.) and the ciphertexts produced by the HE scheme are integers.

🤔 Why does it work? When she decrypts, Alice recovers the product $r\cdot(y-x)$. If this product is $0$, it means that one of its factors ($r$ or $y-x$) is $0$. Since $r$ is chosen to be nonzero, it automatically means that $y-x=0$ and Alice will know that her friend $y$ is also Bob's friend.

If the product is not $0$, it means that $x$ and $y$ are for sure different. Moreover, if we assume that the HE scheme has "circuit privacy" (in other words, if we assume that Alice may never find out information about the random element $r$ used by Bob), then the value $r\cdot(y-x)$ recovered by Alice will look random to her and she will never learn any information about Bob's $x$.
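The arithmetic of Case a can be sketched in a few lines of plain Python. This toy strips the encryption away entirely (Bob would really operate on $\mathsf{Enc}(y)$); it only illustrates why the blinded difference is zero exactly when the two friends match.

```python
import random

def case_a_server_response(x, enc_y):
    """Bob's computation: with real HE this acts on Enc(y); here the
    encryption is stripped away and we compute on y directly."""
    r = random.randrange(1, 2**32)  # random nonzero blinding factor
    return r * (enc_y - x)

# Alice's check after decryption: the result is 0 iff the friends match.
assert case_a_server_response(42, 42) == 0
assert case_a_server_response(41, 42) != 0
```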

Case b. Alice still has one friend $y$, but Bob has many friends $x_0,\ldots,x_n$.

Let's assume that Bob has 10 friends ($n=9$). He chooses a random nonzero integer $r$ and associates to his list of friends the polynomial

$$P(X) = r\cdot (X-x_0) \cdot \ldots \cdot (X-x_9).$$

❗ Keep in mind that $P$ vanishes at $x$ if and only if $x$ belongs to the set $\{x_0,\ldots,x_9\}$.

💡 Here is where HE comes into play: Alice encrypts her element $y$ with a HE scheme, gets $\mathsf{Enc}(y)$ and sends it to Bob. Bob now evaluates his secret polynomial $P$ at $\mathsf{Enc}(y)$ and sends the result back to Alice. Because of the homomorphic properties of the encryption scheme, $P(\mathsf{Enc}(y)) \equiv \mathsf{Enc}(P(y))$, which means that Bob actually sends Alice an encryption of $P(y)$. When Alice decrypts this value, she checks whether it is equal to $0$. If it is $0$, her friend $y$ is also a friend of Bob. Otherwise, $y$ is not among Bob's friends.

🤔 Why does it work? When she decrypts, Alice recovers the product $r\cdot(y-x_0)\cdot \ldots \cdot(y-x_9)$. As in Case a, this product is $0$ if and only if $y$ is one of Bob's friends.
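Here is a small plain-Python sketch of Case b, again with the encryption stripped away: Bob builds the coefficients of $P(X)=r\cdot(X-x_0)\cdots(X-x_n)$ and evaluates it with Horner's rule, which uses only the additions and multiplications that a HE scheme provides. The function names are ours, for illustration only.

```python
import random

def poly_from_roots(roots, r):
    """Coefficients of r*(X-x0)*...*(X-xn), lowest degree first."""
    coeffs = [r]
    for x in roots:
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i] += -x * c   # constant-term contribution of (X - x)
            nxt[i + 1] += c    # X contribution of (X - x)
        coeffs = nxt
    return coeffs

def evaluate(coeffs, y):
    """Horner evaluation: only additions and multiplications are used."""
    acc = 0
    for c in reversed(coeffs):
        acc = acc * y + c
    return acc

bob = [3, 7, 12, 31]               # Bob's friends
r = random.randrange(1, 2**16)     # random nonzero blinding factor
P = poly_from_roots(bob, r)
assert evaluate(P, 7) == 0         # y = 7 is one of Bob's friends
assert evaluate(P, 8) != 0         # y = 8 is not
```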

Case c. Alice has many friends, Bob has many friends too.

This is the general case. At this point you may already have an idea 🤓 of how to solve it! Indeed, Alice and Bob could just repeat the protocol from Case b for every friend of Alice! So easy, right?

Still, at this point you may already be asking yourself:

  1. What happens if Alice has a lot of friends?

In this case, the communication size will grow dramatically, since the protocol from Case b must be repeated for every friend of Alice. In conclusion, this trivial solution may not be efficient in practice.

  2. What happens if Bob has a lot of friends?

In this case, Bob has to homomorphically evaluate a polynomial of very large degree. But this may be impossible, since a typical homomorphic encryption scheme allows performing only a limited number of multiplications homomorphically. The reason is not hard to understand: read about noise growth in HE here!

Towards a more practical PSI protocol. In the next section, we will show you how to tackle the above two problems. Concretely, we will describe a range of optimizations that we can apply in order to make the above solution work in practice.

3. Implementing the PSI protocol

Before giving more details about the actual implementation, we are going to discuss some of the ideas that enable the basic protocol (Case c) to become practical. We will also give an intuition on how HE and Oblivious Pseudorandom Functions (OPRFs) provide privacy for both the client and the server.

Optimizations for a practical protocol ⚙️

Suppose the server holds a set of items $X$ and the client has a much smaller set $Y$. In the basic protocol (Case c), the client has to send an individual encryption for each element in the set $Y=\{y_0,y_1,y_2,\ldots\}$. This means that the communication cost is proportional to $|Y|$ and we write this as $O(|Y|)$. The server has to homomorphically evaluate the polynomial $P$ at each ciphertext. This implies a computational cost proportional to $|Y|\times \mathsf{Homomorphic.Evaluation}(P)$. For simplicity, let's assume that the computational cost of evaluating the polynomial $P$ is proportional to its degree, which is just the size of the server set $|X|$. So we can simply say that the computational cost of the server is $O(|X|\cdot |Y|)$.

Batching

The batching technique in Homomorphic Encryption is usually used to enable Single Instruction Multiple Data (SIMD) operations on ciphertexts. Broadly speaking, this allows the client to 'batch' $N$ items from the set $Y$ into a single ciphertext, and the server can simultaneously do computations on the encrypted items. For the PSI protocol, this means that batching can reduce both the client-to-server communication and the computational cost of the server.

In the above diagram you can see how batching works for multiplication. But it works similarly for addition or any homomorphic operation supported by the scheme.

Notice that when running the basic protocol with a HE scheme that supports batching, the client sends fewer ciphertexts ($|Y|/N$ of them) and the computational cost of the server drops as well, proportionally to $|X|\cdot |Y|/N$, because the server can homomorphically evaluate the polynomial $P$ at all $N$ inputs simultaneously.

Conveniently, the HE scheme BFV supports batching and the TenSEAL library makes it very easy to take advantage of it. In fact, when using TenSEAL, you don't need to know how batching actually works under the hood (it uses the CRT representation), as the implementation does the batching by default (plaintexts are represented as vectors of integers and homomorphic operations act component-wise on the plaintexts).
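The component-wise semantics that batching provides can be mimicked with ordinary Python lists. This sketch ignores encryption entirely; the point is only that one slot-wise operation processes $N$ client items at once, just as one homomorphic operation does on a batched ciphertext.

```python
# SIMD-style toy: one "plaintext" holds N items; operations act slot-wise.
N = 4
batch_y = [2, 5, 7, 11]   # N of the client's items packed together
batch_x = [7] * N         # a server value broadcast to every slot
r = [3, 9, 4, 6]          # per-slot random nonzero masks

# One slot-wise computation handles N membership tests at once:
result = [ri * (yi - xi) for ri, yi, xi in zip(r, batch_y, batch_x)]
assert result[2] == 0                               # slot 2 matched (7 == 7)
assert all(v != 0 for v in result[:2] + result[3:]) # the other slots did not
```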

Hashing

Remember that the HE scheme can't support too many multiplications. Concretely, the scheme works correctly only when the homomorphic computation is expressed as a circuit (addition/multiplication gates) with 'small' multiplicative depth. To be more precise, in our case, the multiplicative depth corresponding to the parameters of the Brakerski-Fan-Vercauteren scheme implemented in TenSEAL is $3$ or $4$. So, for the rest of the blog post, we will assume that the HE scheme supports only computations of multiplicative depth $\leq 3$. You have below two examples for which decryption works correctly.

This is a major problem if we want to use the basic protocol, because the server is limited to homomorphic evaluations of polynomials of degree at most $8$. So the set of the server can't have more than $8$ items. This is not really satisfying, as we would like the server to have millions of items 🤯. So how to increase the number of items on the server side? 🤔

The first thing one can do is to partition the set of the server, $X=X_{0} \cup X_{1} \cup \ldots \cup X_{m-1}$, and the set of the client, $Y=Y_0\cup Y_1 \cup \ldots \cup Y_{m-1}$, into 'bins', using some agreed-upon hash function $h:\mathcal{U} \to \{0,1,\ldots,m-1\}$, with both sets $X,Y$ contained in $\mathcal{U}$. Items fall into the same 'bin' if they hash to the same value.

Now we can run the basic protocol (Case c) $m$ times on the pairs of sets $(X_i ,Y_i)$ to find the intersection of $X$ and $Y$. Notice that this optimization can dramatically reduce the computational cost of the server, from $|X|\cdot |Y|$ down to $\sum_{i=0}^{m-1} |X_i|\cdot |Y_i|$. Moreover, this strategy also allows for a much larger server set ($|X| \gg 8$), because the server runs the basic protocol on the smaller sets $X_i$. So it's enough to have $|X_i|\leq 8$ for all bins to be able to compute $X\cap Y$ in this way.
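A plain-Python sketch of the binning idea (the real protocol uses agreed-upon cryptographic hash functions; Python's built-in hash is just a stand-in): because both parties use the same hash, matching items always land in the same bin, so running the protocol per bin recovers the full intersection.

```python
def hash_to_bins(items, m):
    """Partition a set into m bins with an agreed-upon hash (toy: hash mod m)."""
    bins = [set() for _ in range(m)]
    for item in items:
        bins[hash(item) % m].add(item)
    return bins

m = 8
X = set(range(0, 100, 3))        # server set
Y = {3, 4, 9, 50, 51}            # client set
X_bins, Y_bins = hash_to_bins(X, m), hash_to_bins(Y, m)

# Running the basic protocol per bin recovers the full intersection:
found = set()
for Xi, Yi in zip(X_bins, Y_bins):
    found |= Xi & Yi             # stand-in for one per-bin PSI run
assert found == X & Y
```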

For security reasons, the client must pad its bins with some dummy messages to hide the cardinality of each bin, before engaging in the interaction with the server. By doing the padding, all the client's bins have the same number of items; let's call this number $B_Y$ (notice that $|Y| < m\cdot B_Y$). For example, for $X=Y=\{0,1,\ldots,m-1\}$ and a uniformly random 'hash' function $h:\{0,1,\ldots,m-1\} \to \{0,1,\ldots,m-1\}$, we have $B_Y=O(\log m)$ with high probability (see the Balls and Bins Problem). In this setting, the computational cost of the server is now $O(m\log m)$ and the client-to-server communication is $O(m\log m)$.

❗ Using batching, both costs go down by a factor of $N$. To keep the exposition simple, we ignore this for now. But keep in mind that we can always use it to get better performance.

To get a feel for why the padding is necessary, suppose that the bin $Y_3$ is empty and the client doesn't pad it. This means that nothing is sent for this bin, and the server will be able to learn that the client doesn't hold any elements that hash to $3$. This is valuable information that the server should not be able to learn from the execution of the protocol 😟.

In general, if we do the padding on the client side, the actual computational cost of the server is proportional to $\sum_{i=0}^{m-1} |X_i|\cdot B_Y$ and the communication cost is proportional to $\sum_{i=0}^{m-1} B_Y= m\cdot B_Y$.

To minimize the costs, we would like some kind of hash function (table) with a range $\{0,1,\ldots,m-1\}$ as small as possible, but sufficiently large to map the set of the client with as few collisions as possible. A good candidate is the so-called Cuckoo hash table 🐦, which works for $|Y|\approx \frac{2}{3}\cdot m$ and guarantees that there are no collisions (i.e. $B_Y=1$) when hashing the client set $Y$. For $|X|=m$ and $|Y|=\frac{2}{3}\cdot m$, using Cuckoo hash tables, the computational and communication costs drop to $O(m)$, compared to the previous example with similar parameters.

In the actual protocol only the client uses Cuckoo hash tables. The server has to compute simple hash values that are compatible with the Cuckoo hash table structure.

Windowing

Let's suppose that the set $X$ of the server is partitioned into many bins, $X=X_{0} \cup X_{1} \cup \ldots \cup X_{m-1}$. We denote the maximum bin capacity (load) by $B_X=\max_{0\leq i <m} |X_i|$. So far, we can run the protocol only when the parameters are chosen such that the maximum bin capacity is at most $8$. But this bound can actually be increased for the same multiplicative depth 💥! This can be done by increasing the client-to-server communication cost by the logarithmic factor $\log B_X$.

💡The idea is the following. The server needs to homomorphically evaluate the polynomials corresponding to each bin $X_i$. This means that the server must be able to do homomorphic evaluations of polynomials of degree $B_X$. Let $n=\lfloor \log B_X \rfloor$. To intersect with one item $Y = \{y\}$, the client encrypts the powers $\{y,y^2,y^{2^2},y^{2^3},\ldots, y^{2^{n}}\}$ and sends them to the server.

Now, for any power $k\leq B_X$, the server uses the binary representation of $k$, namely $k=k_0+k_1\cdot 2^{1} + k_2\cdot 2^{2} + \cdots + k_n\cdot 2^n$ with $k_i \in \{0,1\}$, to compute the encryption of $y^k$ as $$y^k=y^{k_0} \cdot \left( y^{2^1} \right)^{k_1}\cdot \left( y^{2^2}\right)^{k_2} \cdot \left(y^{2^3} \right)^{k_3}\cdot\ldots \cdot \left(y^{2^n}\right)^{k_n}.$$ This computation can be done in multiplicative depth $\lceil\log (n+1)\rceil$. Keeping the multiplicative depth equal to $3$, as before, we can afford $n\leq 7$, which gives $B_X\leq 255$. This means we can run the basic protocol on each bin as long as the maximum load $B_X$ satisfies this bound.
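Windowing is easy to check in plain Python, with modular arithmetic standing in for the HE operations: the client ships the $n+1$ powers $y^{2^i}$, and the server reconstructs any $y^k$ from the binary representation of $k$. The function names are ours, for illustration.

```python
from math import floor, log2

def window_powers(y, B_X, mod):
    """Powers y^(2^i) the client would encrypt, for i = 0..floor(log2(B_X))."""
    n = floor(log2(B_X))
    powers, p = {}, y % mod
    for i in range(n + 1):
        powers[1 << i] = p
        p = p * p % mod          # one squaring per level
    return powers

def reconstruct_power(powers, k, mod):
    """Server side: assemble y^k from the bits of k (only multiplications)."""
    acc, bit = 1, 0
    while (1 << bit) <= k:
        if k & (1 << bit):
            acc = acc * powers[1 << bit] % mod
        bit += 1
    return acc

mod, y, B_X = 1_000_003, 1234, 255
powers = window_powers(y, B_X, mod)
assert len(powers) == 8                      # y^1, y^2, y^4, ..., y^128
for k in (1, 37, 200, 255):
    assert reconstruct_power(powers, k, mod) == pow(y, k, mod)
```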

❗ Windowing is compatible with Batching. Do you see why? Suppose that at some point during the protocol, the client encrypts the batch $\mathbf{y}=(y_0,y_1,\ldots,y_{N-1})$ and sends it to the server. To also make use of the windowing technique, the client first computes all the powers $\{y_i,y_i^2,y_i^4,\ldots, y_i^{2^n}\}$ needed for windowing and applies batching. Then the client sends all the ciphertexts to the server, as in the picture below.

Notice that the server is able to compute the encryption of $\mathbf{y}^k$, for any $k \leq B_X$, by using the binary representation of $k$ and the ciphertexts sent by the client. In fact, the server can homomorphically evaluate its polynomial $P$ at $\mathbf{y}$ and send back the answer to the client. Now the client can decrypt to recover $P(\mathbf{y})$, then apply $\texttt{Batch\_decode}$ to recover $(P(y_0),P(y_1),\ldots, P(y_{N-1}))$.

🌻 In TenSEAL, $\texttt{Batch\_encode}$ and $\texttt{Batch\_decode}$ are done automatically, so we don't need to worry about them. You can simply think that when you evaluate the polynomial $P$ at a ciphertext that encrypts $(y_0,y_1,\ldots,y_{N-1})$, the evaluation is done component-wise, obtaining $(P(y_0), P(y_1),\ldots, P(y_{N-1}))$.

Partitioning

We have seen how to increase the size of the server's set $|X|$ by using hashing techniques and windowing. We can increase it even further, but this time we pay an extra cost 💰 in server-to-client communication.

To this end, let $\alpha$ be a positive integer, the partitioning parameter. The idea is to further partition each bin $X_i$ into $\alpha$ mini bins, each of size at most $B_X/\alpha$. We get a total of $m\cdot \alpha$ mini bins ($\alpha$ mini bins for each of the $m$ bins). Then, for each mini bin, the corresponding polynomial is computed. Therefore, to answer the client's query, the server has to homomorphically evaluate $m\cdot \alpha$ polynomials. Notice that the degrees of all these polynomials are at most $B_X/\alpha$. Now the server runs the basic protocol (+ windowing) for these mini bins. As long as $B_X /\alpha \leq 255$, the protocol works. The effects of this change are:

✔️ With partitioning, the protocol supports even larger server sets. The reason is that the bins $X_i$ can now hold up to $\alpha \cdot 255$ items. Since the bins are much larger, the server set can be much larger too!

❌ The downside of partitioning is that the server-to-client communication increases by a factor of $\alpha$, because the server has to send back one answer per mini bin.
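A minimal sketch of the partitioning step (the function name and the equal-size split are illustrative; the real server splits each hash-table bin into alpha mini bins and handles each one with its own polynomial):

```python
from math import ceil

def partition_bin(bin_items, alpha):
    """Split one bin into alpha mini bins of (roughly) equal size."""
    size = ceil(len(bin_items) / alpha)
    items = sorted(bin_items)
    return [items[i * size:(i + 1) * size] for i in range(alpha)]

big_bin = list(range(12))
alpha = 3
mini_bins = partition_bin(big_bin, alpha)
assert len(mini_bins) == alpha
assert all(len(mb) <= ceil(len(big_bin) / alpha) for mb in mini_bins)
# One polynomial (and one answer) per mini bin: the degree per polynomial
# drops by a factor alpha, while server-to-client traffic grows by alpha.
```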

Security 🔐

The security of the protocol prevents a potentially malicious server (one that is able to deviate from the honest protocol) from learning any information about the set of the client. Vice versa, the server's privacy is also protected against a possibly malicious client.

Intuitively, the set of the client is encrypted when it is sent to the server, so a potentially malicious server is not able to learn anything from the ciphertexts, regardless of what the server does with these encryptions.

When it comes to the privacy of the server, first recall that the server homomorphically evaluates a polynomial (closely related to its set) at the ciphertexts sent by the client. A malicious client who also has access to the randomness used for encryption and to the secret decryption key may use the result of this computation to infer information about the computation done by the server, in particular about the server's set. This is possible because the BFV homomorphic encryption scheme does not have circuit privacy, which means that the scheme itself does not hide the computation that has been carried out on a ciphertext. The privacy of the server is guaranteed by the use of Oblivious Pseudorandom Functions (OPRFs).

Oblivious PRFs

Let's consider a Pseudorandom Function (PRF) $F$ that takes a secret key $k$ and an input, and efficiently computes an output. As long as the key remains secret, the values that the PRF outputs 'look' completely random. For intuition, you can think of the AES function.

Assume that only the server knows the secret key $k$. The server precomputes the set $X':=\{ F_k(x) : x \in X\}$. Now, assume that through some magic 🔮 (i.e. without having access to any information about the secret key $k$), the client can compute $F_k(y)$ for all $y \in Y$ and create the set $Y':=\{F_k(y) : y \in Y \}$. Now the two can engage in the PSI protocol for the sets $X'$ and $Y'$ to find their intersection, from which the client can deduce $X\cap Y$.

❗ One important thing is that in the eyes 👀 of the client, the set $X'$ does not leak any information about the server set $X$, as the secret key is unknown to the client. Therefore, the privacy of the server is protected against a potentially malicious client!

An Oblivious PRF (OPRF) is an interactive protocol that replaces the above 'magic' 🔮. Namely, it allows the client to learn the PRF values $F_k(y)$ (for some particular PRF) without having access to the secret key. In our implementation, we use a Diffie-Hellman-like OPRF protocol implemented using elliptic curve point additions.

The protocol

🔍 Let's take a closer look at the PSI protocol. The server holds a list of server_size items, while the client holds a list of client_size items.

Let us give you a bird's eye view on the protocol with all the steps:

In the offline phase, both the server and the client preprocess their datasets. This offline preprocessing needs to be done only once by each party. The server preprocesses its set of items and can then engage in the online phase of the protocol many times, with possibly many clients, and vice versa.

  1. Oblivious Pseudorandom Function (OPRF): First, both parties apply an OPRF layer to their inputs. For this, the two parties engage in a Diffie-Hellman-like interactive protocol on elliptic curves. The client and the server end up having their entries encoded in a special way, using the secret key of the server, oprf_server_key. Basically, using a generator $G$ of an elliptic curve, an entry (which is an integer) item is first encoded as the point item $\cdot$ $G$ on the elliptic curve. Then, it is further multiplied interactively by the secret integer oprf_server_key. Finally, they both take sigma_max bits out of the first coordinate of each such point.

After this step, both the server and the client have new datasets, PRFed_server_set and PRFed_client_set, each of sigma_max-bit integers. Check our code for this here and here.

If you're curious how the OPRF protocol works, click here.

The OPRF protocol is as follows:

  • The server gets his PRFed_server_set by doing for each of his items the following:

server_item $\rightarrow$ server_item $\cdot$ $G$ $\rightarrow$ oprf_server_key $\cdot$ server_item $\cdot$ $G$ $\rightarrow$ sigma_max bits out of the first coordinate.

  • The client creates a set to send to the server by doing for each of his items the following:

client_item $\rightarrow$ client_item $\cdot$ $G$ $\rightarrow$ oprf_client_key $\cdot$ client_item $\cdot$ $G$.

  • The server obtains the client's points, multiplies them by oprf_server_key and sends them back to the client.

oprf_client_key $\cdot$ client_item $\cdot$ $G$ $\rightarrow$ oprf_server_key $\cdot$ oprf_client_key $\cdot$ client_item $\cdot$ $G$.

  • The client gets his PRFed_client_set by doing:

oprf_server_key $\cdot$ oprf_client_key $\cdot$ client_item $\cdot$ $G$ $\rightarrow$ oprf_client_key$^{-1}$ $\cdot$ oprf_server_key $\cdot$ oprf_client_key $\cdot$ client_item $\cdot$ $G$ $\rightarrow$ oprf_server_key $\cdot$ client_item $\cdot$ $G$ $\rightarrow$ sigma_max bits out of the first coordinate.
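The blind/unblind dance above can be sketched with exponentiation in $\mathbb{Z}_p^*$ standing in for elliptic-curve scalar multiplication (the real implementation uses curve points; these parameters are toy-sized and not secure):

```python
import math
import random

# Exponentiation in Z_p* stands in for scalar multiplication by G.
# Toy parameters: NOT secure, for illustration only.
p = 2**127 - 1                           # a Mersenne prime
server_key = 0x1234567                   # the server's secret OPRF key

def encode(item):
    """Map an integer item to a group element (the analogue of item * G)."""
    return pow(3, item, p)

# The client picks a blinding key invertible modulo the group order p - 1.
while True:
    client_key = random.randrange(2, p - 1)
    if math.gcd(client_key, p - 1) == 1:
        break

item = 1_000_003
blinded = pow(encode(item), client_key, p)      # client -> server
evaluated = pow(blinded, server_key, p)         # server -> client
inv = pow(client_key, -1, p - 1)                # client unblinds...
prf_value = pow(evaluated, inv, p)              # ...recovering F_k(item)

# The client learned encode(item)^server_key without seeing server_key.
assert prf_value == pow(encode(item), server_key, p)
```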


  2. Hashing: The server and the client agree on using a total of number_of_hashes hash functions with output_bits bits of output.
  • The client performs Cuckoo hashing. You can check the implementation here. By the end of this step, each of his bins has at most $1$ element. For security reasons, empty bins are padded with one dummy message. You may think of his database as a column vector with number_of_bins rows.
  • The server performs Simple hashing; each of his bins has bin_capacity elements. Think of its database as a matrix with number_of_bins rows and bin_capacity columns. Roughly speaking, for each element and each hash function, the server inserts the element in a bin corresponding to its hash value. You can check its implementation here.

Recall that after the OPRF step, the database entries of both the client and the server have sigma_max bits. In order to reduce the storage of the items in the hash tables, both the client and the server perform a permutation-based hashing step. In this way, they reduce the length of the items to sigma_max - output_bits + $\lfloor \log($number_of_hashes$)\rfloor+1$ bits, by encoding part of the item in the index of the bin where the item belongs.

❗ The bins correspond to all the possible values that the hash functions can take. This means that number_of_bins = 2 ** output_bits.

Click here to see how the elements are inserted in their hashing tables.

We describe here shortly the steps for their insertion:

  • We write item as item_left||item_right, where item_right has output_bits bits and item_left has sigma_max - output_bits bits.
  • For a specific index $i$ less than number_of_hashes, we compute the so-called location function $\mathsf{Loc}(i,$ item$)$:

item $\rightarrow$ item_left||item_right $\rightarrow$ $\mathsf{Loc}(i,$ item$)$ = $H_i$(item_left) $\oplus$ item_right.

  • We insert item in the bin corresponding to its location as item_left||i, which has sigma_max - output_bits + $\lfloor \log($number_of_hashes$)\rfloor + 1$ bits.

Notice that given item_left||i and the location $\mathsf{Loc}(i,$ item$)$, you can recover the initial item by computing item_right = $\mathsf{Loc}(i,$ item$)$ $\oplus$ $H_i$(item_left).

Why does it work? Let's say that the server has stored $x$ as $(x_L || i)$ in a bin, whereas the client has stored $y$ as $(y_L || j)$ in the same bin. Then $(x_L || i) = (y_L || j)$ leads to $x_L = y_L$ and $i = j$, and hence $x = y$.
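A small sketch of permutation-based hashing (OUTPUT_BITS and the hash stand-in are illustrative, not the repository's): the location already encodes item_right, so only item_left||i needs to be stored, and the full item is recoverable.

```python
OUTPUT_BITS = 8                       # number_of_bins = 2 ** OUTPUT_BITS

def H(i, left):
    """Toy stand-in for the i-th hash function, OUTPUT_BITS bits wide."""
    return hash((i, left)) & (2**OUTPUT_BITS - 1)

def location(i, item):
    """Loc(i, item) = H_i(item_left) XOR item_right."""
    left, right = item >> OUTPUT_BITS, item & (2**OUTPUT_BITS - 1)
    return H(i, left) ^ right

def stored_form(i, item):
    """What actually goes into the bin: item_left || i."""
    return (item >> OUTPUT_BITS, i)

def recover(stored, loc):
    """item_right = Loc(i, item) XOR H_i(item_left)."""
    left, i = stored
    right = loc ^ H(i, left)
    return (left << OUTPUT_BITS) | right

item, i = 0xABCD, 2
loc = location(i, item)
assert recover(stored_form(i, item), loc) == item
```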


❗ One way to avoid hashing failures in Cuckoo hashing is to use at least $3$ hash functions for inserting items into the Cuckoo table. While increasing the number of hashes above $3$ may reduce the hashing failures, it significantly increases the simple hashing cost of the server. Hence, a good choice is number_of_hashes = $3$.

❗ For number_of_hashes = $3$, we can insert the client's items into the Cuckoo hash table with a failure probability of $2^{-40}$, using a total number of bins equal to number_of_bins = $\frac{3}{2}\cdot$ client_size. For more details, check Hashing failures here.

❗ For number_of_hashes = $3$, we should choose bin_capacity carefully so that inserting number_of_hashes $\cdot$ server_size items via simple hashing succeeds with a failure probability of $2^{-30}$. For example, for server_size $= 2^{20}$, we must pick bin_capacity = $536$. To see how to choose the bin_capacity parameter as a function of the security parameter, the output size of the hash functions and the server's maximum set size, check Table 1 or run the bin_capacity_estimator.py script to obtain more values.

Example: Let's take bin_capacity = $6$. Throughout this blog post, we will show the bins as rows: the server's bins contain items represented as red balls, whereas the client's bins contain items represented as green balls. 🎨

💡 A padding step with dummy messages might be required to get all the bins of the server full. Server padding is important for batching, as you will see soon.

Hence, the server can run the PSI protocol on each bin. As discussed above, the server can associate to each bin a polynomial and evaluate it at an encrypted query. The degree of the polynomial is in this case bin_capacity, the number of elements in each of the server's bins. But this degree can be pretty big and the encryption scheme that we use may not allow us to correctly compute the polynomial evaluation. So we partition the bins into mini bins and associate to each mini bin a polynomial of a lower degree.

  3. Partitioning: The server partitions each bin into alpha mini bins, each having bin_capacity/alpha elements.

❗ There will be number_of_bins \cdot alpha mini bins.

Hence, performing the PSI protocol for each bin reduces to performing alpha PSI protocols for each mini bin.

Example: Let's assume that each bin is partitioned into alpha = $3$ mini bins. In this case, each mini bin has $2$ elements, as shown in the picture.

  4. Finding the polynomials: For each mini bin, the server computes the coefficients of the monic polynomial that vanishes at the elements of that mini bin, see this. This means that each mini bin can now be described by bin_capacity/alpha + 1 coefficients. By convention, the coefficients are written in decreasing order, namely the first column in the mini bin corresponds to the leading coefficient of the polynomial, the second column to the next coefficient and so on. You can see how both finding the polynomials and partitioning are implemented on the server side here.

The coefficients of the polynomials are computed modulo plain_modulus, a parameter involved in the homomorphic encryption scheme. More on this parameter will follow soon.

Example: In our case, each mini bin has 2 elements, which means that it can be represented by the coefficients of a monic polynomial P(X) = X^2 + a_1X + a_0 of degree 2, i.e. by [1, a_1, a_0].
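As a toy illustration of this step (with made-up names and a small modulus, not the actual server code), the coefficients of the monic vanishing polynomial can be computed iteratively, multiplying in one linear factor (X - r) at a time:

```python
def vanishing_poly_coeffs(roots, t):
    """Coefficients, leading coefficient first, of the monic polynomial
    prod_{r in roots} (X - r), reduced modulo t."""
    coeffs = [1]                                       # the constant polynomial 1
    for r in roots:
        shifted = coeffs + [0]                         # current polynomial times X
        scaled = [0] + [(r * c) % t for c in coeffs]   # current polynomial times r
        coeffs = [(a - b) % t for a, b in zip(shifted, scaled)]
    return coeffs

# mini bin {2, 3} modulo t = 97: (X - 2)(X - 3) = X^2 - 5X + 6
print(vanishing_poly_coeffs([2, 3], 97))  # → [1, 92, 6]
```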

At this step, both the server and the client perform the actual PSI protocol, as in Case b, but with the optimizations described above.

So far, we said that the PSI protocol can be performed for each bin of the client, against the corresponding mini bins of the server. But we can do better 💪.

  1. Batching: The HE scheme that we use benefits from a special property, called batching, that helps the client to pack his bins and to perform many queries simultaneously.

Recall that the BFV scheme allows encrypting polynomials of degree less than poly_modulus_degree with coefficients modulo plain_modulus. The plain_modulus is chosen to be a prime congruent to 1 modulo 2 \cdot poly_modulus_degree, which makes it possible to identify each such polynomial with a vector of poly_modulus_degree integer entries modulo plain_modulus. This is what batching is about. TenSEAL allows encrypting vectors of integers, by first performing the above correspondence and then performing the actual encryption. Decryption in TenSEAL works similarly, returning vectors of integers.

If you're interested in finding out more about this encoding, click here.

The plaintext space is the ring of integer polynomials \frac{\mathbb{Z}_t[X]}{(X^N+1)}, where t = plain_modulus and N = poly_modulus_degree.

💡 Since t \equiv 1 mod 2\cdot N, the polynomial X^N+1 factorizes as a product of linear polynomials modulo t, i.e.

X^N+1 = (X-u_1) \cdot (X-u_2)\cdot \ldots \cdot (X-u_N) \text{ mod } t.

Therefore, there is a ring isomorphism, given by the Chinese Remainder Theorem (CRT):

\frac{\mathbb{Z}_t[X]}{(X^N+1)} \rightarrow \mathbb{Z}_t^{N},

where a polynomial p is encoded as [p(u_1), p(u_2), \ldots, p(u_N)].

This map behaves nicely with respect to addition and multiplication: addition of polynomials corresponds to addition of vectors, whereas multiplication of polynomials corresponds to component-wise multiplication of vectors.

Example: If N = poly_modulus_degree = 2 and t \equiv 1 mod 4, then X^N+1 = (X-u_1) \cdot (X-u_2) mod t. In this case, a polynomial p corresponds to the vector of integers [p(u_1), p(u_2)] modulo t.
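This example can be checked with plain Python (toy parameters N = 2 and t = 5, chosen by us just to make the arithmetic visible):

```python
t, N = 5, 2                      # toy parameters with t ≡ 1 (mod 2N)
roots = [u for u in range(t) if (u * u + 1) % t == 0]  # roots of X^2 + 1 mod 5

def encode(p):
    """Batch-encode polynomial p (constant term first) as its evaluations."""
    return [sum(c * pow(u, k, t) for k, c in enumerate(p)) % t for u in roots]

def mul_mod(p, q):
    """Multiply p and q modulo X^2 + 1 and modulo t."""
    full = [0] * (len(p) + len(q) - 1)
    for a, ca in enumerate(p):
        for b, cb in enumerate(q):
            full[a + b] += ca * cb
    out = [0, 0]
    for k, c in enumerate(full):                 # reduce using X^2 ≡ -1
        out[k % 2] = (out[k % 2] + ((-1) ** (k // 2)) * c) % t
    return out

p, q = [1, 2], [3, 4]                            # 1 + 2X and 3 + 4X
same = encode(mul_mod(p, q)) == [(a * b) % t for a, b in zip(encode(p), encode(q))]
print(same)  # → True
```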


  • The client batches his bins (each holding one integer entry) into number_of_bins/poly_modulus_degree vectors.
  • The client encodes each such batch as a plaintext.
  • The client encrypts these plaintexts and sends them to the server.
  • The server batches his mini bins into mini batches.

Example: For simplicity of exposition, let's take poly_modulus_degree = 2. This means that the client batches together vectors of 2 elements. The server matches this on his side, by batching together 2 mini bins into a mini batch.

In our implementation, we choose number_of_bins = poly_modulus_degree, so all the client's bins are batched into just one plaintext.

🔍 The PSI protocol is now performed on a client's batch and on one of the corresponding server's mini batches.

❗ In general there are alpha \cdot number_of_bins/poly_modulus_degree mini batches, so the PSI protocol is applied the same number of times.

Batching lowers the communication and computation costs on the client side.

⚠️ Before going into the next step, windowing, let's take a deep breath and recall what we have so far. You can forget about batching for a minute. Each mini bin has bin_capacity/alpha elements, so it is represented by a monic polynomial of degree bin_capacity/alpha. Let's call this degree D, for short.

Therefore, each polynomial looks like

P(X) = X^D + a_{D-1}X^{D-1}+\ldots+a_{1}X+a_0,

for some integer coefficients a_i.

As in Case b, the client should send an encryption \mathsf{Enc}(y) of one bin containing an integer y, and the server should evaluate the polynomial P(X) at this encrypted value.

Performing this evaluation requires, in particular, computing \mathsf{Enc}(y)^D \equiv \mathsf{Enc}(y^D), which can be done in multiplicative depth \lceil \mathsf{log}(D)\rceil. But this multiplicative depth may be too large to be handled by the HE scheme. In this case, the client may come to the rescue and send the encryptions of all powers:

\mathsf{Enc}(y), \mathsf{Enc}(y^2), \mathsf{Enc}(y^3),\ldots, \mathsf{Enc}(y^D)

So by using the magic powers of the HE scheme 🔮, the evaluation of the polynomial turns out to be just a scalar product of these encryptions with the vector of coefficients of the polynomial:

P(\mathsf{Enc}(y)) = \langle (1, a_{D-1},\ldots,a_0), (\mathsf{Enc}(y^D), \mathsf{Enc}(y^{D-1}),\ldots,\mathsf{Enc}(1)) \rangle.

Still, the client needs to send D encryptions (per bin), which means a lot of communication on the client side 😱.

We need to look for a tradeoff ⚖️. Notice that if the client sends the \lfloor\mathsf{log}(D)\rfloor+1 encryptions

\mathsf{Enc}(y), \mathsf{Enc}(y^2), \mathsf{Enc}(y^4), \ldots, \mathsf{Enc}(y^{2^{\lfloor\mathsf{log}(D)\rfloor}}),

then each power \mathsf{Enc}(y)^e \equiv \mathsf{Enc}(y^e), for any e\leq D, can be computed by writing e in its binary decomposition. The server recovers any \mathsf{Enc}(y^e), in the worst case, by multiplying all the \lfloor\mathsf{log}(D)\rfloor + 1 (encrypted) powers, which requires a circuit of multiplicative depth \lceil\mathsf{log}(\lfloor\mathsf{log}(D)\rfloor +1)\rceil.

So yay! We lowered the initial multiplicative depth of the polynomial evaluation from \lceil\mathsf{log}(D)\rceil to \lceil\mathsf{log}(\lfloor\mathsf{log}(D)\rfloor +1)\rceil, while incurring only a small \mathsf{log}(D) factor in communication cost. It turns out that we can do even better: we can lower the depth to \lceil\mathsf{log}(\lfloor\mathsf{log}(D)/\ell\rfloor+1)\rceil, for some so-called windowing parameter \ell, if we are willing to pay some extra communication cost.

Now it's time to come back to the protocol and tell you how this idea is used, combined with batching:

  1. Windowing: It lowers the depth of the arithmetic circuit above (i.e. of evaluating the polynomial of degree D). The client sends sufficiently many (encrypted) powers so that the server can recover all the missing powers (i.e. the ones of exponent less than D) by computing circuits of low multiplicative depth. We will generalize the discussion above a bit, but don't worry, we'll take it step-by-step.

As you have seen, if the client sends the encryptions \mathsf{Enc}(y^{k}), for k a power of 2, the server can recover any missing power. This can be done by writing the exponent in base 2 and by multiplying the received encrypted powers accordingly. But we can think of writing the exponent in a bigger base, let's say 2^\ell, for some parameter \ell (which is ell, in our implementation). Each term that appears in such a decomposition is of type i\cdot 2^{\ell j}, where 0 \leq i \leq 2^{\ell}-1 and 0\leq j \leq \lfloor \mathsf{log}(D)/\ell\rfloor. Therefore, the client should send \mathsf{Enc}(y^k) for each such term k. Check the code for this here.

For each batched plaintext y, the client sends \mathsf{Enc}(y^{i \cdot 2^{\ell \cdot j}}), for each 1 \leq i \leq 2^{\ell} -1 and 0\leq j \leq \lfloor \mathsf{log}(D)/\ell\rfloor. Notice that if i=0, then y^{i\cdot 2^{\ell j}} =1, hence no need to send an encryption of 1. 😄
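A small sketch (our own helper, not the linked code) of which exponents k = i \cdot 2^{\ell j} the client sends:

```python
from math import floor, log2

def window_exponents(D, ell):
    """All exponents i * 2**(ell * j), for 1 <= i <= 2**ell - 1 and
    0 <= j <= floor(log2(D) / ell), for which Enc(y**k) is sent."""
    exps = set()
    for j in range(floor(log2(D) / ell) + 1):
        for i in range(1, 2 ** ell):
            exps.add(i * 2 ** (ell * j))
    return sorted(exps)

print(window_exponents(2, 1))  # → [1, 2]
print(window_exponents(8, 1))  # → [1, 2, 4, 8]
```

For ell = 1 this is exactly the "powers of two" strategy described above; a bigger ell adds more exponents per window in exchange for a shallower circuit on the server side.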

You can see how both batching and windowing work together here.

  1. Recover all powers: The server computes all the necessary powers in order to evaluate the polynomials, see the code here.

Given the base-2^{\ell} representation of e,

e = e_0 + 2^{\ell}\cdot e_1 + 2^{2\ell}\cdot e_2 + \ldots + 2^{\lfloor \mathsf{log}(D)/\ell\rfloor\ell}\cdot e_{\lfloor \mathsf{log}(D)/\ell\rfloor},

where 0 \leq e_0, e_1,\ldots,e_{\lfloor \mathsf{log}(D)/\ell\rfloor} \leq 2^{\ell} -1, we can compute

y^e=\prod_{i=0}^{\lfloor \log (D)/\ell \rfloor} \left(y^{2^{\ell i}} \right)^{e_i}.

Therefore, recovering \mathsf{Enc}(y^e) for each such e involves, in the worst case, multiplying \lfloor \mathsf{log}(D)/\ell\rfloor+1 (encrypted) powers. Hence, the server computes a circuit of multiplicative depth \lceil\mathsf{log}(\lfloor\mathsf{log}(D)/\ell\rfloor+1)\rceil! 😄

You can check how this step is implemented here.
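In plain (unencrypted) arithmetic, the server-side recovery can be sketched like this; the names are ours, and in the real protocol the multiplications happen on ciphertexts:

```python
def all_powers(windowed, D, ell, t):
    """Recover y**e mod t for every 1 <= e <= D from the windowed powers.
    `windowed` maps each sent exponent i * 2**(ell*j) to y**(i * 2**(ell*j))."""
    powers = dict(windowed)
    for e in range(1, D + 1):
        if e in powers:
            continue
        acc, j, rest = 1, 0, e
        while rest:
            digit = rest % (2 ** ell)          # base-2**ell digit e_j of e
            if digit:
                acc = acc * powers[digit * 2 ** (ell * j)] % t
            rest //= 2 ** ell
            j += 1
        powers[e] = acc
    return powers

# y = 3, t = 97, D = 8, ell = 1: the client sent y, y^2, y^4, y^8
sent = {k: pow(3, k, 97) for k in [1, 2, 4, 8]}
recovered = all_powers(sent, 8, 1, 97)
print(all(recovered[e] == pow(3, e, 97) for e in range(1, 9)))  # → True
```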

This depth should be supported by the encryption scheme. So the parameters bin_capacity, alpha and ell should be chosen so that \lceil\mathsf{log}(\lfloor\mathsf{log}(bin_capacity/alpha)/ell\rfloor+1)\rceil doesn't exceed the allowed depth of the scheme, which is 3.

A tradeoff appears here in choosing the right ell. Notice that the bigger ell, the lower the depth of the circuit the server computes, i.e. smaller computation time, but the more powers the client needs to send, i.e. bigger communication cost.

Another tradeoff is possible when choosing the right partitioning parameter alpha. Notice that the bigger alpha, the smaller the degree D = bin_capacity/alpha, i.e. lower computation time on the server side, but the more ciphertexts the server needs to send, i.e. bigger communication cost.

  1. Doing the scalar product: This is the last step on the server side. The server evaluates the polynomials corresponding to the mini bins in a scalar-product fashion, as explained above. However, due to batching, this scalar product is a bit trickier; you can check the code here.

❗ Recall that the BFV encryption scheme that we use allows multiplying ciphertexts by plaintexts, which means that

\mathsf{Enc}(p \cdot m) \equiv p \cdot \mathsf{Enc}(m),

for any polynomials p and m of degree less than poly_modulus_degree with coefficients modulo plain_modulus. Hence TenSEAL allows multiplying ciphertexts by vectors of integers modulo plain_modulus.

Let's recap all these steps and see how performing the scalar product works in our example.

Example: Let's go back to the example where each mini bin has 2 elements, which corresponds to a monic polynomial of degree 2. Consider a mini batch of mini bins whose corresponding polynomials are P_1 = X^2 + a_1X + a_0 and P_2 = X^2 + b_1X+b_0.

This means that the client encodes a batch [y_1, y_2] as a plaintext y and sends \mathsf{Enc}(y). Because of windowing, the client also sends \mathsf{Enc}(y^2) to the server. In this case, the server can simultaneously check if y_1 belongs to the first mini bin (i.e. P_1 evaluates to 0 at y_1) and y_2 belongs to the second mini bin (i.e. P_2 evaluates to 0 at y_2). Indeed, the server takes the first column of the mini batch and multiplies \mathsf{Enc}(y^2) by it, takes the second column and multiplies \mathsf{Enc}(y) by it, and adds them to the third column:

[1,1]\cdot \mathsf{Enc}(y^2) + [a_1, b_1] \cdot \mathsf{Enc}(y) + [a_0, b_0].

This is the result that the server sends to the client.

In general, for any mini batch, in order to evaluate the corresponding polynomials, the server computes the sum over i of \mathsf{Enc}(y^i) multiplied by the (D+1-i)-th column of the mini batch. In total, the server computes and then sends alpha \cdot number_of_bins/poly_modulus_degree encrypted results to the client. (Notice that the communication is increased by a factor of alpha.)

  1. Verdict: It's time for the client to find out the intersection of the two lists of elements.

The client decrypts the results that he gets from the server and recovers a vector of integers (corresponding to the underlying polynomial plaintext). The client checks if this vector contains any zeroes. A zero corresponds to an element from the intersection. To recover such an element we use the index where the zero is found. You can check how the client gets the intersection in the code here.
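The client-side check can be pictured with a small sketch (illustrative names; the real code maps indices back through the client's Cuckoo hash table):

```python
def intersection_from_result(decrypted, cuckoo_table):
    """A zero at index idx means the item stored in the client's bin idx
    belongs to the intersection; empty bins hold None."""
    return [cuckoo_table[idx] for idx, value in enumerate(decrypted)
            if value == 0 and cuckoo_table[idx] is not None]

# decrypted vector [P1(y1), P2(y2)] = [0, 95]: only the item in bin 0 matches
print(intersection_from_result([0, 95], ["alice@mail", "bob@mail"]))  # → ['alice@mail']
```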

Let's go back to the example!

Example: Recall that P_1(X) = X^2 + a_1X+a_0 and P_2(X) = X^2 + b_1X+b_0. Also, remember that y was encoded from [y_1, y_2]. Due to the encoding, y^2 corresponds to [y^2_1, y^2_2]. Consider p_2, p_1, respectively p_0, the encodings of [1,1], [a_1, b_1], respectively [a_0, b_0].

The server computes p_2 \cdot \mathsf{Enc}(y^2) + p_1 \cdot \mathsf{Enc}(y) + p_0 and sends back the result to the client. When the client decrypts it, he gets the result p_2 \cdot y^2 + p_1 \cdot y + p_0, which, after batch decoding, equals:

[1,1] \cdot [y^2_1, y^2_2] + [a_1, b_1] \cdot [y_1, y_2] + [a_0, b_0],

where the additions and multiplications are done component-wise, so the above computation is equal to [y_1^2+a_1\cdot y_1 + a_0, y_2^2+b_1\cdot y_2 + b_0], which is nothing but [P_1(y_1), P_2(y_2)].
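We can replay this example with concrete (made-up) numbers, keeping everything unencrypted just to check the arithmetic:

```python
t = 97                                   # toy plain_modulus
y1, y2 = 5, 11                           # the client's two batched bin entries

# mini batch whose polynomials are P1 = (X - 5)(X - 8) and P2 = (X - 9)(X - 12)
a1, a0 = -(5 + 8) % t, (5 * 8) % t
b1, b0 = -(9 + 12) % t, (9 * 12) % t

y_vec = [y1, y2]
y2_vec = [y1 * y1 % t, y2 * y2 % t]

# component-wise "scalar product" of the columns with the powers of y
result = [(c2 * s + c1 * v + c0) % t
          for c2, c1, c0, s, v in zip([1, 1], [a1, b1], [a0, b0], y2_vec, y_vec)]
print(result)  # → [0, 95]  (y1 = 5 is a root of P1; y2 = 11 is not a root of P2)
```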

Setting the parameters

We know that we have introduced a lot of parameters along the way. 😱 Let's recap them a bit and see how we chose them in our implementation. 🤓

  • server_size, client_size: the sizes of server's and client's databases.
  • sigma_max appears in the OPRF layer, as this step maps the database entries to sigma_max-bit integers.
  • number_of_hashes, output_bits, number_of_bins, bin_capacity appear in hashing, as this step maps the previous integers, using number_of_hashes hash functions, to (sigma_max - output_bits + \lfloor \mathsf{log}(number_of_hashes)\rfloor + 1)-bit integers in number_of_bins bins.
  • alpha appears in partitioning, as server splits the bins in alpha mini bins.
  • plain_modulus, poly_modulus_degree : parameters of the HE scheme.
  • ell used in windowing.

The elements from both hash tables (client and server) are represented on sigma_max - output_bits + \lfloor \mathsf{log}(number_of_hashes)\rfloor + 1 bits. The HE scheme must be able to encrypt integers of this size, so we set:

sigma_max \leq \lfloor \mathsf{log}(plain_modulus)\rfloor + output_bits - (\lfloor \mathsf{log}(number_of_hashes)\rfloor + 1).

Also, recall that for recovering all the (encrypted) powers on the server side, we require

\lceil\mathsf{log}(\lfloor\mathsf{log}(bin_capacity/alpha)/ell\rfloor+1)\rceil \leq depth.
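These two constraints can be checked with a few lines of Python (an illustrative helper of our own, not part of the implementation):

```python
from math import ceil, floor, log2

def params_ok(sigma_max, output_bits, number_of_hashes,
              plain_modulus, bin_capacity, alpha, ell, depth=3):
    """Check the item-size and multiplicative-depth constraints above."""
    bits_ok = sigma_max <= (floor(log2(plain_modulus)) + output_bits
                            - (floor(log2(number_of_hashes)) + 1))
    D = bin_capacity // alpha
    depth_ok = ceil(log2(floor(log2(D) / ell) + 1)) <= depth
    return bits_ok and depth_ok

print(params_ok(sigma_max=40, output_bits=13, number_of_hashes=3,
                plain_modulus=536903681, bin_capacity=536, alpha=5, ell=1))  # → True
```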

Let's choose a set of these parameters step-by-step:

  • number_of_hashes = 3.
  • client_size = 5535 and number_of_bins = 8192. (Cuckoo hashing succeeds with probability 1 - 2^{-40}; check Hashing failures here.)
  • server_size = 2^{20}, so bin_capacity = 536. (Simple hashing succeeds with probability 1-2^{-30}; check Hashing failures here, or run the bin capacity estimator script.)
  • output_bits = \mathsf{log}(number_of_bins) = 13.
  • poly_modulus_degree = 8192 and plain_modulus = 536903681.
  • sigma_max = 40.
  • For safety, we can think of depth = 3. If ell = 1, alpha is lower bounded by 5, since bin_capacity/alpha \leq 128.

We chose this 29-bit plain_modulus to get sigma_max as large as possible. A large sigma_max guarantees that the protocol runs without false positives. For example, keeping client_size = 5535 and server_size = 2^{20}, a 52-bit plain_modulus guarantees that the protocol can't give false positives, except with probability less than 2^{-30}. The current TenSEAL implementation does not allow us to choose a 52-bit plain_modulus while keeping poly_modulus_degree = 8192.

Let's play! 🎉

You can generate the datasets of the client and the server by running set_gen.py. Then run server_offline.py and client_offline.py to preprocess them. Now go to the online phase of the protocol by running server_online.py and client_online.py. Go check it out! 😄