|
- {
- "cells": [
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "# LWE with Side Information. Attacks and Concrete Security Estimation"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Warning: in constrast with the paper \"LWE with Side Information, Attacks and Concrete Security Estimation\", here is used the french matrix convention: every matrix and vector is up to a transposition.* "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "One has an instance of a LWE problem: one has\n",
- " * $A \\in \\mathbb{Z}_q^{m \\times n}$ and \n",
- " * $b= Az + e \\in \\mathcal{Z}_q^m$\n",
- "\n",
- "where $z \\leftarrow \\chi^n$ and $e \\leftarrow \\chi^m$ are sampled with independant and identically distributed coefficients following the small distribution $\\chi$.\n",
- "\n",
- "Objective of the problem: *Find $z$.*\n",
- "\n",
- "*Notation, we will note $s$ for $(e, z)$. It represents all the secret elements. Even if one is not interested directly by $e$, it is equally important as $s$. And one will not $\\bar{s}$ for $(e, z, 1)$.*"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Quick reminder about the original primal attack"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To recover the secret $z$, here is the method of the primal attack :\n",
- " 1. First, transform the LWE instance into a uSVP instance thanks to the lattice $\\Lambda = \\{(x, y, w) \\in \\mathbb{Z}^{m + n + 1} : x + Ay - b w = 0 \\text{ mod } q\\}$.\n",
- " 2. Then, reduce the lattice to find a *short vector* in this lattice. If the attack is successful, it will be $\\bar{s} = (e,z,1)$.\n",
- " \n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Methodology to integrate some hints"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Distorted Bounded Distance Decoding (Distorted BDD)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The Distorted BDD is a generalisation of the traditionnal BDD. Let $\\Lambda \\subset \\mathbb{R}^d$ be a lattice, $\\Sigma \\in \\mathbb{R}^{d \\times d}$ be a symmetric matrix and $\\mu \\in \\text{Span}(\\Lambda) \\subset \\mathbb{R}^d$ such that\n",
- "$$\\text{Span}(\\Sigma) \\subsetneq \\text{Span}( \\Sigma + \\mu^T \\cdot \\mu) = \\text{Span}(\\Lambda)$$\n",
- "The Distorted Bounded Distance Decoding problem $DBDD_{\\Lambda, \\mu, \\Sigma}$ is the following problem:\n",
- " * Given $\\mu$, $\\Sigma$ and a basis of $\\Lambda$.\n",
- " * Find the unique vector $x \\in \\Lambda \\cap E(\\mu, \\Sigma)$\n",
- "where $E(\\mu, \\Sigma)$ denotes the ellipsoid\n",
- "$$E(\\mu, \\Sigma) := \\{x \\in \\mu + \\text{Span}(\\Sigma) | (x-\\mu)^T \\Sigma^{-1} (x-\\mu) \\leq \\text{rank}(\\Sigma) \\}$$\n",
- "One will refer to the triple $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$ as the instance of the $DBDD_{\\Lambda, \\mu, \\Sigma}$ problem."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Why \"Distorted\" ? Because $$E(\\mu, \\sigma) = \\sqrt{\\Sigma} \\cdot B_{\\text{rank}(\\Sigma)} + \\mu$$\n",
- "where $B_{\\text{rank}(\\Sigma)}$ is the centered hyperball of radius $\\text{rank}(\\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Some remarks to understand Distorted BDD: \n",
- " * *Warning !* $\\Sigma$ has no inverse. So one uses a generalisation of the notion of inversion. One will denote $\\Sigma^{-1}$ for the matrix satifying $$\\Sigma \\cdot \\Sigma^{-1} = \\Pi_\\Sigma$$. Such a matrix can be compute thanks to the relation $$\\Sigma^{-1} := (\\Sigma + \\Pi_\\Sigma^\\bot)^{-1} - \\Pi_\\Sigma^\\bot$$\n",
- " * It is possible to interpret Distorted BDD as the promise that the secret follows a Gaussian distribution of center $\\mu$ and covariance $\\Sigma$. In fact, one will use the point of view to manipulate Distorted BDD instances during hint integration.\n",
- " * How is defined the perimeter of the ellipsoid ? $(x-\\mu)^T \\Sigma^{-1} (x-\\mu)$ can be seen as a non-canonical Euclidian squared distance $\\|x-\\mu\\|_\\Sigma^2$. And then, if one uses the point of view of Gaussian distribution of center $\\mu$ and covariance $\\Sigma$ for Distorted BDD, what is the average of $(x-\\mu)^T \\Sigma^{-1} (x-\\mu)$ ?\n",
- " $$\\begin{align*}\n",
- " \\mathbb{E}[\\|x-\\mu\\|_\\Sigma^2] & = \\mathbb{E} \\left [\\sum_{j=1}^d \\sum_{k=1}^d (x-\\mu)_j (\\Sigma_{j,k}^{-1}) (x-\\mu)_k \\right ] \\\\\n",
- " & = \\sum_{j=1}^d \\sum_{k=1}^d \\Sigma_{j,k}^{-1} \\cdot \\mathbb{E} [ (x-\\mu)_j (x-\\mu)_k ] \\\\\n",
- " & = \\sum_{j=1}^d \\sum_{k=1}^d \\Sigma_{j,k}^{-1} \\cdot \\text{Cov} ((x-\\mu)_j, (x-\\mu)_k)) \\hspace{10mm} \\text{because} \\mathbb{E} [x-\\mu] = 0 \\\\\n",
- " & = \\sum_{j=1}^d \\sum_{k=1}^d \\Sigma_{j,k}^{-1} \\Sigma_{k,j} \n",
- " = \\sum_{j=1}^d (\\Sigma^{-1} \\Sigma)_{j,j} = \\sum_{j=1}^d (\\Pi_\\Sigma)_{j,j} \\\\\n",
- " & = \\text{Tr}(\\Pi_\\Sigma) = \\text{rank}(\\Pi_\\Sigma) = \\text{rank}(\\Sigma)\n",
- " \\end{align*}$$\n",
- " * The condition $\\Sigma \\in \\mathbb{R}^{d \\times d}$ is just a technical condition. One needs it somewhere in the improved attack. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### Global methodology"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To recover the secret $z$ *with some hints*, here is the method of the improved primal attack :\n",
- " 1. First, transform the LWE instance into a Distorted BDD instance using the lattice of the original primal attack.\n",
- " 2. Then, integrate the hints *by transforming the Distorted BDD instance*.\n",
- " * Each type of hints needs a specific tranformation.\n",
- " * The studied types are ($v$, $l$ and $\\sigma$ are known):\n",
- " * Perfect hints : $\\langle s, v \\rangle = l$\n",
- " * Modular hints : $\\langle s, v \\rangle = l\\text{ mod }k$\n",
- " * Approximating hints : $\\langle s, v \\rangle = l + \\varepsilon_\\sigma$\n",
- " * ...\n",
- " 3. Transform the Distorted BDD instance into a uSVP instance.\n",
- " 4. Then, reduce the lattice to find a *short vector* in this lattice.\n",
- " \n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Some important remarks:\n",
- " * When one integrates a hint, the lattice is only **restricted**. So, if one find the solution to the last Distorted BDD, it will **directly** be the solution of the first Distorted BDD (no need to transform the solution). And it will be **directly** give the solution of the LWE, because it will be in the form of $\\bar{s}=(e, z, 1)$.\n",
- " * To complete the previous remark, when one gets the short vector from the *lattice reduction*, the only transformation to get the solution of the LWE problem is the inversed transformation of the step 3 \"Distorted BDD $\\mapsto$ uSVP."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [Section 3.2] Transform the LWE instance into a Distorted BDD instance using the lattice of the original primal attack"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Denoting $\\mu_\\chi$ and $\\sigma_\\chi^2$ the average and variance of the LWE distribution $\\chi$, we can convert this LWE instance to a $DBDD_{\\Lambda, \\mu, \\Sigma}$ instance with:\n",
- " * $\\Lambda = \\{(x, y, w) \\in \\mathbb{Z}^{m + n + 1} : x + Ay - b w = 0 \\text{ mod } q\\}$\n",
- " * $\\mu = (\\mu_\\chi \\cdot\\cdot\\cdot \\mu_\\chi 1)$\n",
- " * $\\Sigma = \\left ( \\begin{array}{cc} \\sigma_\\chi^2 I_{m+n} & 0 \\\\ 0 & 0 \\end{array} \\right ) $\n",
- " \n",
- "The lattice $\\Lambda$ is of full rank in $\\mathbb{R}^d$ where $d := m+n+1$ and its volume is $q^m$. A basis is given by the column vectors of\n",
- "$$\\left ( \\begin{array}{ccc}\n",
- "q I_m & A & b \\\\\n",
- "0 & -I_n & 0 \\\\\n",
- "0 & 0 & 1 \\\\\n",
- "\\end{array}\n",
- "\\right ) $$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**If $s=(e,z)$ is the solution of the LWE instance, when $\\bar{s}$ is the solution of this $DBDD_{\\Lambda, \\mu, \\Sigma}$ instance ?**\n",
- "$$(\\bar{s}-\\mu)^T \\Sigma^{-1} (\\bar{s}-\\mu) = \\left (e ~~ z ~~ 0 \\right ) \\left ( \\begin{array}{cc} \\frac{1}{\\sigma_\\chi^2} I_{m+n} & 0 \\\\ 0 & 0 \\end{array} \\right ) \\left ( \\begin{array}{c} e \\\\ z \\\\ 0 \\end{array} \\right ) = \\left (e ~~ z ~~ 0 \\right ) \\left ( \\begin{array}{c} \\frac{1}{\\sigma_\\chi^2} e \\\\ \\frac{1}{\\sigma_\\chi^2} z \\\\ 0 \\end{array} \\right ) = \\frac{1}{\\sigma_\\chi^2} \\left ( \\|e\\|^2 + \\|z\\|^2 \\right ) = \\frac{1}{\\sigma_\\chi^2}\\|s\\|^2 $$\n",
- "\n",
- "So, $\\bar{s}$ is the solution if this $DBDD_{\\Lambda, \\mu, \\Sigma}$ instance if, and only iff,\n",
- "$$\\frac{1}{\\sigma_\\chi^2}\\|s\\|^2 \\leq \\text{rank}(\\Sigma) = d-1$$\n",
- "So, $$\\frac{1}{d-1} \\sum_{i=1}^{d-1} s_i^2 \\leq \\sigma_\\chi^2 $$\n",
- "\n",
- "And it is the case up to a constant probability."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 19,
- "metadata": {},
- "outputs": [],
- "source": [
- "import numpy as np\n",
- "drange = range(1, 2*640+1+1, 10)\n",
- "probas = np.zeros((len(drange), 3))\n",
- "for i, d in enumerate(drange):\n",
- " nb_experiments = 100000\n",
- " secret = np.random.normal(0, 1, size=(nb_experiments, d))\n",
- " data = np.var(secret, axis=1) > 1\n",
- " m, t = np.average(data), 1.960*np.sqrt(np.var(data)*nb_experiments/(nb_experiments-1))/np.sqrt(nb_experiments)\n",
- " probas[i] = [m, m-t, m+t]"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 20,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXcAAAD8CAYAAACMwORRAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi40LCBodHRwOi8vbWF0cGxvdGxpYi5vcmcv7US4rQAAIABJREFUeJzt3XmUXGd55/HvU3vvrW61ZFn75kWyjSTaG8YQjG0Qi00SJ8gnwxgGcELGgQTOJCYMzgSSyYHkEDKMM2xxDjgJsoGAFWysYGJisI1tGduSZW2ttVtr71t1bfe+88etllutanVZllR9W7/POX1Udevtuk/d7v7pqffeW9ecc4iIyPQSqXQBIiJy5incRUSmIYW7iMg0pHAXEZmGFO4iItOQwl1EZBpSuIuITEMKdxGRaUjhLiIyDcUqteKZM2e6RYsWVWr1IiKh9Pzzz3c551omG1excF+0aBGbNm2q1OpFRELJzPaXM07TMiIi01BZ4W5m7zSzHWbWZmZ3l3j8g2bWaWYvFr8+cuZLFRGRck06LWNmUeBe4CagA3jOzDY4514ZN/QB59xdZ6FGERF5jcrp3K8C2pxze5xzOWA9cOvZLUtERF6PcsJ9LtA+5n5Hcdl4v2lmm83se2Y2/4xUJyIip6WccLcSy8Zf4ePfgEXOuSuAx4BvlXwiszvNbJOZbers7HxtlYqISNnKCfcOYGwnPg84NHaAc67bOZct3v0G8MZST+Sc+7pzrtU519rSMulhmiIicprKCffngOVmttjMEsA6YMPYAWY2Z8zdW4BtZ65EkeltT98edvXuqnQZ54RzjkNDhyYfeJbWnfNyZY/f1buLb275Jg/ueJAnOp4g7+XLXs+R4SNkvexJjx0aOsQD2x9gb//esus4XZMeLeOcK5jZXcBGIArc55zbamafAzY55zYAHzezW4AC0AN88CzWLBIazjnMSs1sBgZyA3xo44cYKYzw7bXf5pKmS8p+7oHcAO0D7VzafCkRi9A+2M5nn/wseS/PX13/VyyoX1D2c/Vl+jiSPkJdoo4ZyRlUx6sB8J3Pkwef5BcHf8GC+gWsbF7JiuYVJKKJE74/7+fZ2buTLZ1b2NK1hb5sH7ctv423zn8rEYscX8c9T93D4+2P8+HLPswn1nzihG0zmBtka/dWohalNl5L3s/TPdJNxsswv24+i+oXUZuoPT6+N9PLzw/+nPbBdo4MH2Fh/ULeu+S91MRrWL9jPQ+1PcSShiXcsOAGhvJD/GDXD9jVt4urL7iatYvXUnAFXul+hUwhw8rmlaycuZKF9QtpSDTwj1v/kXtfvJeCXzi+vtbZrfzdDX9HfaKekcII23u2s7RxKfWJerJelueOPMfP2n/GEx1PcHj4MACzqmfRnGomFUsxkB1gd/9uAO664pP87urFZf98TodV6gLZra2tTmeoypmWKWSIR+JEI9FJx2a9LLv7dpOMJpldPfuE4Hi98l6eb2z5Bve/cj/vXfpefveK36W5qvmkcX/xy7/kge0PEPHraK5N8Z13/wuza2bTl+kjHo1TE685YbxzjnQhzQM7HuCbW77JYG6QhfULedv8t/HdHd8l5zmMCPGo41NXfoo5NXNI59OMFEZO+BrOD9OT6aFrpIu9/Xs5mj56wnpmVc/ikqZLaB9sZ2//XqIWx3NB51odq+ZNF76J5TOWc3DoIPsG9rGjZ8fxTrUm2ogRZcjrZlnjMtbMWkNDsoENuzfQme4mkl1KIbmDW5bewrsXv5tnjzzLs0eeZWv3Vnznn3K7LqxfyKqWVaQLaR5vf5yCX8AwqmONDBd6iViEVDRFupCmKXoxOboY8roBmBFdgsssxku9zKAXvN5UtJaYJRkqdB9fRyKSIOfniGdW0du+FoBY7U6q5/yQpY1LuGnRTazfvp6eTA+GsaRhCYeGDzFSGCEeSdLISob6FpNK5khW9WDRDM6y+H6UgZ7FdHcu5s9vfjMfePNFr+VX6jgze9451zrpOIW7nAv92X6q49XEI3EAejI9dAx2cNnMy453dqcylBuiY6iDpQ1LiUfjJzzWk+nhpWMv8W97/o2ftf+MxmQjty67lZXNK3n2yLM8f/R5+jJ9DOWHSEQTNKeaiUVi7O7bTcG92pk1Jhu5aMZFLGtcRkt1CzOSM1g9azVLGpcAkPNy/OLgL/jV0V/xUudLRCxC6wWtXNJ0Cb2ZXo6mjxKxCNWxah7Z+wjbe7ZT65aRtj1Uxau46oKrqEvUMbd2Lr+5/DfpyfTw/h+tY1VfPdcMpPja/F5m1TSBBW/rAWrjtdQmaskWsmS8DJlCBlc8nqHWu4z+7otpvmAzff4ukoVlLN6/kgQ+2xe+Qi62e8LtmYikqIo2kLR6rDCT9NBsBgbrqUrkSFaNkKruJB/twHlJOg+9kVTvQvpiRrLmELNn7yOX2MqQ101drJkqm42fmUtn12yGBy5knpdjBgPsahxm5txN5OhixBsg4WbRsu8aPsZL/EXtHEZangYgYlGaosvIDy/hyNELcS5CKpHD8yPkcjXgYqSqepnR2EtV7UEG2YUDYulWjhxcSSTbzAU2SGesQOOc7cSTg/S2r2Ctf4xX8nPYnowT842b/cO8pfYAD/ZfyguJOny/ipY8VFuOA9EqEtVHaWocIJnqo+dIA3/jPcfb/WfIJRrosiY+UbiG3fOfwSNDg7uczsOX09I0QKq2g3y2kcOHF1M/1MR7Utu4qXYfPa6ObdmZHMzV0FeI0xxNc1v9K6zJPU/mxv9N41W3l/8HNIbCXc6Yo8NHOTh0kMZUI82pZhqSDZN+T97Ls7t/N88efpaN+zeyuXMz8UicZY3LyPt52vraALhoxkXcteourpt7HfFInKyXZf/AfvYN7KN7pJueTA8vdr7I80efp+AXSEaTrGxeSTwSZzA/yJHhI/RkegCoijaQyq7B4r30us04fOKRJE2RS4j4jfheAjMPiw3ikyc/cgFd3S3EY47amiFSVT148UMMeAfJ+SPHX8uaWWtYPmM5G/dtpC/bR8zizIgtweHRnd+DI+g2DTsevElrwDtwI+/PHuOHthxbsIVUTSd5l2Yg30U0EqEhMYP+4QH+s6ONet/xZ4nr2HhhnNroLNJDFxA1I1k1SCSSwbkEzo/heXHyhSh9R5r4ONu4vno/3xps5WFvGZ9JPMo69+8APMjbucfeQo4E+EmcHwc/gfMTRFyUWrI02wAt9LG6uourqw8y17oZtGq6/TpeGmnhqeG5LIsd4w9rH2PeyHZy8Xo6Uhfxs+zFPDi4kjZmcyEDLI8c5ob6g1wZ283CzHaS+T4ADqaW84WhtTxduJh+qvlw/N/5H7HvEnEF+iON/Aa/Q3ukhcTwLN4U3cstDbtZHWnDWZS0VRF3Ber9PuJ+hmOxOez25/DE0Dx+ll1KDRl+v+GXvJ1nqc0cwXDkYzU8Eb+e9nw963iUVGEAgMHUHOJehlS+9/jPdCh5AVHyVGWDjj0fr+Ng6iL224UcyNXzvsKj1PoD2JoPgPNxB36J69zJx7mdn7mVfDTyCu+p2cYOt4CfDs7niqoubo69wJzh4u7G6mbIDsL4Of6qGbDsRrjyo7Dg6tf0dzhK4S4ncc7hOQ/f+QzkBugY7KA7080VM6+gpfrko5cODh3kG5u/wUO7Hzph7rEp1cTihsXk/TxHh4+SiCa4eeHNvHnum3m562Ueb3+cLV1byPvB2/gZscUUBleQiOeJpY7ggHT/QnqHEjTOeZJhP+hSx4bjWI2x+bj0pfT3zaS56RjRqg7MIOKq8Au1DA/NpKtnJkuHjTvqX2J/ro5/zq8gHc/RmKnj+vhe5qdGaIxmyLooB7K1pAsR3lJ3kCtiB8hZkkP+DHZkm3ly6AJ2+PPotSQWG6Jx5jaqmjeR9rup91dxuONyUsOzaY3sp0CETW4e2cQQrlBHrJDCJ4IX8XiXPc/fVP0T1YV+vEiC79i7WJ9uZYAaumI+dfNeIJN8lk8dG2JdIkXi4pvhma/ypcJtjCRbeFN1BwXi7C800u2lqIkUqI3kqbEcdZbmrdnHqcr3Qf1cGDiII4Lhw7V3QSQGT36ZgeQcMokZJPwMcX+EuJcmWhgh6pfYqZish8YFkB2A4W7ID7/6WPNyuOL9MNABB38FR7YADlf8iVH86dFyCcx7I8x9I1gEnvoKdLeduJ4V74M3/QHuu3eQH+qhK7WYOentmPMgmgy+NxIN6ojEoGYWxJLQsyd4rnz61eeKxOGid8DslVB/IbQ/C1t/GNR+8bvgmt+Hvv2w/RGIJWDV78D8q2DnRnjloeA1X7gK4lVw6AU4/FKwnpFemH0Z/PpX4YLLg3Vlh+C7d0DbY+Sj1cS9NLRcCr17oZAJXv+8K4N6Ll4Ls1aA82HgUPB8+TREEzDnDcHrex0U7iHWPdLN5s7NXDXnqpPmXD3fY0vXFvb276V9sJ35dfO5ceGN1CXqjo9xzvHUoad45vAz7OrbRftgO/3ZfgZzg3jOK7nOy2deTuvsVpbPWA7Aw3se5unDT2NESYxcS2/nMmqrc1RXDVNd040fO4YRx7xGCgzQ42/BL3awTbFFWOYiuntayA7M5nr/EO9v3Ea3q+Png3NImce769uY547wlf5r+GltNS1NI0SjBTwvQm9/A8NDTbhCPTEvzprIHtY1vMLFsSO8UFjIxv4F5IgxM5ZheaqfVcnDrPC205JuC0LF+XiRBP2peTSl90y8oS0aBJJfgIGDkBs6/pBvMbKJRl6OXsqX+65jh5vDHdWbeH/yaWal204Y11+9gOp8H8lc8A7CiySCAJ37Rnj7n8FL63Evfed4EHqRBD+OvZ3OtM+HYhvhAz+AxW+FBz4AOx4OnjjZAM47oabjoklYdB28/R6Yswr2/ids+ze45D2w9G3BmLbH4Ol7g9eYqIZ4TfHfakjUQqIGamZCTQs0LYEZi2B056Zz0HcgCLtkXVBbZMzU2dAx2PloMGbGouD7Z18GqfoT6/Q92P049O2DdC80LYbLfjNYz8Bh+OHvQS4NS94Ki98C866CeGrin5fvwbFtcODp4DlW/DrUjNuPkR2CTB80zJv4eSaTGQi2UWTcdKGXh41/CsOd8KaPw9w1UMjBsa1QPw9qz83h3Qr3EErn09z/yv3c9/J9pAtpqmJVvGPRO7i06VIakg3s6d/DQ20PHd/5NdrpJqNJrr3wWpY2LD2+46qtr42oxWmMzSXhZuMXqvEKKXw/Bi5KNhejd6COfD5FS0s7VQ076PP2Hd9pVhttwRtYTfzwAv606ikuSx1jwOrp9BvYkp3FpqGZJCMeF1UPMpAzHsovJ1d7DJeezbX+UdY27GdVbB9LRl4mkR8IQiU/wvHz3xK1UN0EfQc4nFrGr+JrSLs4VWRZFjnMBYWD1OR7iOeDt9ZE4jBjIXTv5qRz6FKNQbisuBUuvw362+FX9wdd2KLrYPGvBZ1dsjb4Ax06FnRSLZcEgTdq8Cgc3QKdO4I/4MGjsPPHQec1av7VsOymoAP087DvF8H42llQd2EwJjcUdMFv/BBEiwekdbVB144gOA48hXtpPebl8Fe8j8hvF8/5K2SD5xsNTDPI9Adv72NVQYcZS50cOnJeUbhPQc45NndtZiA7QFOqiZp4DTk/R1+mj0f3Pcqjex9lMD9ITWEVnYffwIVzdpNOPH98/teIMMMu49ihy8gMzyWRr6aQOsacea9g1bsYLBzDcwXqIwvoOXQt0d5lLLcjXF7VxQWJEZqjGaoieaI4Gm2YuRyjzutlM8v5Tv9lPFVYRjqRhkiWVdk0H294irdkH8ciMezCVUHIDR4J3jKP41ucAzUrmZtpI14YCjromRfB3FZYcQsseVsQhse2ARa8PTWDl78PT/xN8Pa5kAneujYvg+alUDcHqmfCrEuDjjRZF4TdoReD703WQ+1sqLvg1a7zTMtnYNsG6NkLK38dWk7vCIeTDB4JphAu/62Tu0+RU1C4TwHOOQZyAwxkB9jZu5P7tt7H5s7NJcfGLUlNYTX9By7hjyMvcVNyK/9eWM3Xhq7hYKQOoiPUej7vj2/l9toXmFPooDpzlKHkLB7mev5pYDVtbja5WI73sp1P1T3G/JES55JF4hCNB2/LZywKut4DTx9/+z+UuhAfqM8cCrrtNXfAdZ+A+jmjLwqGjkLXzmB6oGFu0Alv+V4wPXDhKrjkvbDozUGn/Fr4xcPg1JmKTEjhfg4U/ALbe7bTNdJFxCLEI3FmVc+iPlHPT/b/hPU71p9wJlpNpIWho9eTHp5NKjFCKpWnUIiSzUdpGqrlw9XP8wEeJlEYwuauwR16AXM++Xg92VgdVdkuon422JFz4ergrf+hF3BtjwU7pIBCtJqYlw663yvWBTubZi4P9t4n61+dJjjhhWRh/5PBzrJj2yA3HExxXPre1x7QInJWlRvuFbvMXpjt7N3J37/49zxz+BmG8iV2eBU1RpZSn34fmWwVA0PVXDkywB83PE5zTZqBaCPDVJOI5amxfuYltmAFB8vfEewou+AyrP8gvPx94v3txDP9wWFUb1gX7EQbMw1hQ52w7+fQs4fYwCFYflPwPOV2wLEkLL0h+BKRaUHhXibP9zg8fJgHdz7It7d+m7hVU+u9kYGj8xgZmQE4zPLU1qRJpYbp65rF1f5h1jbspzkyxLz4XmYW9kB8Psy6lJbhTsh2B8Ear4LWu4PgnrHo1ZU2zIXrPj55cbUtcNlvnK2XLiIhpHCfwFBuiEf3PcpLnS/xctfL7BvYd/xY73j6Gpo7Lua22v28NfFTmmp7cUQoWJzuSBO92WreGP82VfleyDUEO8xmzYHVfxwczTHuDEsRkTNN4T5G3s8zmBvk4T0P843N36A320t1tJ46W0Jj7kYGhxqwniq+FP8J18cfhCzQsAJmXAQ4yI9wweCRYIfjkmvhqo8Gh+FpB6GInGMKd2Bf/z4+9tjH6BjqOL6s1r+U4f3raMwmWZ3YT2tNJ8tie7g89jRxPLjhs8GRJOfoxAURkdfivA/3o8NH+ei/30lPepim3HvJ5ZL0d9XyAXbwwfhfU01w1iFpg8b5cMnNwVmHTWf34zpFRF6P8zbcezI97O7bzV/88i85NtzLte2reU/VMRojadbEnqY63wOL18JFNwenkbdcEuz8FBEJgfMu3J89/CxfeO4L7OzdCYAR49aD8/m89wCMJCHVAAtWwds+A/MmPZRURGRKOm/CfaQwwp89+Wf8eN+PqbYWaobfR3dPE7fntvC5yPeDj+B811+fvdPYRUTOofMm3L/yq6/w430/xnpv4sruJL/d2MbSqg7mFjbDyt+AtV9UsIvItHFehPuLx17k/m3/RG3vCr6VfpqLoq+A1xzMo7/hj+DX/lSHK4rItDLtwz1TyPDpn/9PEvkaNvb/hLrqJlj79/CG2xXoIjJtTetwT+fTfPbJz9IxtJ+/6MpRVT8f+73/hKrGSpcmInJWTdtwb+tt45P/+Un29e/n+s5Z3JrZBOseVrCLyHlhWob7cH6YDz76IUZyjoXtN/B/C/+Iu/Kj2KI3V7o0EZFzYlpOOn9vx/fpz/XxpgPL+KF3P9a0BLvxf1W6LBGRc2bahXvez/PNzd9iwUiSr3g/ILrsbdhHHtNFJ0TkvDLtpmU27t1IX/4Yn+/vxL/mLiI3f15HxYjIeWdapZ5zjr9/4R+YlYtydSFB5AYdvy4i56dplXybjm6ifbiNj/V3Er/qI8FFoEVEzkPTKty/v/0RYr7xjnSO2LW/V+lyREQqZtqEu3OOJ9of59qREWKXvR9qZ1W6JBGRipk24b67bzeDfjdvGxmm6vo/qHQ5IiIVVVa4m9k7zWyHmbWZ2d2nGHebmTkzO+cfhP6jtscAuMK1QMvF53r1IiJTyqThbmZR4F5gLbACuN3MVpQYVwd8HHjmTBdZjo17fsIl2Rwty9ZWYvUiIlNKOZ37VUCbc26Pcy4HrAduLTHu88AXgcwZrK8sPZkeOkZ28rb0CDNW33KuVy8iMuWUE+5zgfYx9zuKy44zs9XAfOfcj85gbWX7j/1PgMHVmQg2/+pKlCAiMqWUE+6lLk/kjj9oFgH+FvjUpE9kdqeZbTKzTZ2dneVXOYkNOx6jueAzb85bIBI9Y88rIhJW5YR7BzB/zP15wKEx9+uAy4Cfmdk+4BpgQ6mdqs65rzvnWp1zrS0tLadf9Tj7el/mDdkMTavfd8aeU0QkzMoJ9+eA5Wa22MwSwDpgw+iDzrl+59xM59wi59wi4JfALc65TWel4nHyfp5+18OivEf8ohvPxSpFRKa8ScPdOVcA7gI2AtuAB51zW83sc2ZW8b2XB/rb8c3RGJkNqfpKlyMiMiWU9amQzrlHgEfGLbtngrG/9vrLKt+mQ9sAmF29+FyuVkRkSgv9GapbOl4EYPGsVRWuRERk6gh9uB/o2crsQoG5i1dXuhQRkSkj9OF+NNPBknye+vkrK12KiMiUEepw951PF/0syDuonzv5N4iInCdCHe5Hh4+Si/i0uAawUudaiYicn0Id7i8e2QHArNS8ClciIjK1hDrcX+jYDMDCpssqXImIyNQS6nDfe+wl6j2P+QvXVLoUEZEpJdThfiS9nyX5As0Lr6h0KSIiU0qow73TdbMoXyDSvKTSpYiITCmhDffB3CDDkQIz/VqIlvUpCiIi543QhnvfyDAA0cTMClciIjL1hDbc+4cHAPCqZ1W4EhGRqSe04V4Y6QPAxfQxvyIi44U23PP5LACRSLzClYiITD2hDfdcIQdATOEuInKS0IZ7oZABIKojZUREThLacM8XO/doROEuIjJeaMM9Vwjm3GORRIUrERGZekIb7nmvOOeuaRkRkZOENtwLoztUo+rcRUTGC2+4ezpaRkRkIiEO9zwAUXXuIiInCW245/1gh2o8pnAXERkvtOHuFYLOXXPuIiInC224F/wg3OPRZIUrERGZekIb7p5XADQtIyJSSmjDfbRzjyncRUROEtpw94rhnoilKlyJiMjUE9pwL/ialhERmUhow320c4/HtENVRGS88Ia7Czr3ZFzTMiIi45UV7mb2TjPbYWZtZnZ3icd/z8y2mNmLZvYLM1tx5ks9kT86LRNX5y4iMt6k4W5mUeBeYC2wAri9RHj/i3PucufcKuCLwJfOeKXjFIqde0Kdu4jIScrp3K8C2pxze5xzOWA9cOvYAc65gTF3awB35kosbbRzV7iLiJysnA9Dnwu0j7nfAVw9fpCZ/Xfgk0ACuKHUE5nZncCdAAsWLHittZ7Adx4ACU3LiIicpJzO3UosO6kzd87d65xbCvwJ8D9LPZFz7uvOuVbnXGtLS8trq3Qcrxju2qEqInKycsK9A5g/5v484NApxq8H3vd6iiqH5zxizhGL6zh3EZHxygn354DlZrbYzBLAOmDD2AFmtnzM3XcDu85ciaX5ziPqHKYLZIuInGTSZHTOFczsLmAjEAXuc85tNbPPAZuccxuAu8zsRiAP9AJ3nM2iITjOPWqAlZo1EhE5v5XV9jrnHgEeGbfsnjG3P3GG65qU73yiZ/2YHBGRcArtGao+Xnn/M4mInIfCG+7OJ6LOXUSkpPCGOx7RShchIjJFhTbcPc25i4hMKLTh7uOHt3gRkbMstPnooc5dRGQioQ13hyNa8pMRREQktOHuoaNlREQmEtpwD+bc1bmLiJQS4nDXtIyIyEQU7iIi01Bow93DEXEKdxGRUkIb7j5Oc+4iIhMIbbh7pnAXEZlIaMM96NxDW76IyFkV2nTUtIyIyMRCG+6eQTS85YuInFWhTUdNy4iITCy06egZmpYREZlAeMMdiOhyHSIiJYU23H2DiIW2fBGRsyq06eihHaoiIhMJbTp6BhHTtIyISCnhDXfQ0TIiIhMIbToWDKLq3EVESgpluDvn8Mw0LSMiMoFQhrvne4A6dxGRiYQy3HOFHKBwFxGZSCjDPZvPAhCJxCpciYjI1BTKcM/k0oAOhRQRmUgowz2XCzr3qKlzFxEppaxwN7N3mtkOM2szs7tLPP5JM3vFzDab2U/NbOGZL/VVuUIGgJimZURESpo03M0sCtwLrAVWALeb2Ypxw14AWp1zVwDfA754pgsdK18ozrlrWkZEpKRyOvergDbn3B7nXA5YD9w6doBz7nHnXLp495fAvDNb5omy+aBzj6pzFxEpqZxwnwu0j7nfUVw2kQ8DP349RU0mnx+dc4+fzdWIiIRWOa1vqStiuJIDzf4L0Aq8dYLH7wTuBFiwYEGZJZ4sV5yWiUYV7iIipZTTuXcA88fcnwccGj/IzG4EPgPc4pzLlnoi59zXnXOtzrnWlpaW06kXgELxJCbtUBURKa2ccH8OWG5mi80sAawDNowdYGarga8RBPuxM1/miQre6Jy7OncRkVImDXfnXAG4C9gIbAMedM5tNbPPmdktxWF/DdQC3zWzF81swwRPd0bkCnkAYgp3EZGSyprXcM49Ajwybtk9Y27feIbrOqXRQyFjmnMXESkplGeoFrxi565wFxEpKaThPnq0TKLClYiITE3hDPfinHtc4S4iUlI4w90vfp67wl1EpKRQhrtXnHOPxxTuIiKlhDLc88XOPa4dqiIiJYUy3D2/AEA8lqpwJSIiU1Mow/34oZCxZIUrERGZmkIZ7n6xc09oh6qISEmhDPfC6LRMXJ27iEgpoQx3zwXTMom45txFREoJZ7iPTstoh6qISEmhDPfjc+6alhERKSmU4e65Yrgn1LmLiJQS0nD3AEhqzl1EpKSQhvvotIzCXUSklHCGu6/OXUTkVEIZ7r7ziDlHNKbPlhERKSWU4e45j6gDzCpdiojIlBTKcHd4RHGVLkNEZMoKZbgf79xFRKSkUIa7j0+00kWIiExhoQx3z3lE1LmLiEwolOHuO59YpYsQEZnCwhnu+OrcRUROIbThrjl3EZGJhTLcPXwiTse4i4hMJJTh7uPUuYuInEJIw90nijp3EZGJhDTcnaZlREROIaTh7hNR5y4iMqGywt3M3mlmO8yszczuLvH4W8zsV2ZWMLPbznyZJ/Jw4fxfSUTkHJk0I80sCtwLrAVWALeb2Ypxww4AHwT+5UwXWIqPI+oU7yIiEynnRM+rgDbn3B4AM1sP3Aq8MjrAObev+Jh/Fmo1sTLHAAAIpUlEQVQ8iY8jot5dRGRC5STkXKB9zP2O4rKK8cxpzl1E5BTKCfdSKXpaJ/+b2Z1mtsnMNnV2dp7OUwCjx7mrcxcRmUg5CdkBzB9zfx5w6HRW5pz7unOu1TnX2tLScjpPAYAH6txFRE6hnHB/DlhuZovNLAGsAzac3bJOzTfNuYuInMqkCemcKwB3ARuBbcCDzrmtZvY5M7sFwMyuNLMO4LeAr5nZ1rNZtAdETOEuIjKRsj4W3Tn3CPDIuGX3jLn9HMF0zTnhG+rcRUROIZQJ6WmHqojIKYUyIdW5i4icWigTsgBETB/6KyIykVCGu2cQ1Q5VEZEJhTIhPYyILtchIjKhcIa7aVpGRORUwhnuaFpGRORUQpeQ+XweZ5qWERE5ldCFezafASASKev8KxGR81Lowj1XDPeYKdxFRCYSunDP5kcA7VAVETmV0IV7Lhd07tGIwl1EZCKhC/dsoTjnbvEKVyIiMnWFLtzz+RwAUe1QFRGZUOjCPVecc48p3EVEJhS6cC946txFRCYTunDP5rMARCOacxcRmUjowj1fCMI9pnAXEZlQ6MK9UAz3aFThLiIykdCFe97LA+rcRUROJXzhrmkZEZFJhS7cC8XOPRpNVLgSEZGpK4ThHnTucc25i4hMKIThXpxzV7iLiEwofOHuj4Z7ssKViIhMXeEL99HOPaY5dxGRiYQu3D0/+PiBREydu4jIRMIX7l4BgLg6dxGRCYUu3I/PuSvcRUQmFMJwL3bu0VSFKxERmbpCF+5+sXNPxDXnLiIykbLC3czeaWY7zKzNzO4u8XjSzB4oPv6MmS0604WO8oqdezKuzl1EZCKThruZRYF7gbXACuB2M1sxbtiHgV7n3DLgb4EvnOlCR3lOO1RFRCZTTud+FdDmnNvjnMsB64Fbx425FfhW8fb3gLebmZ25Ml812rnH41Vn4+lFRKaFcsJ9LtA+5n5HcVnJMc65AtAPNJ+JAscb7dw15y4iMrFywr1UB+5OYwxmdqeZbTKzTZ2dneXUd5K5jRdxZbaa6lTNaX2/iMj5oJxw7wDmj7k/Dzg00RgziwENQM/4J3LOfd051+qca21paTmtgj9yy59z353PUFfTeFrfLyJyPign3J8DlpvZYjNLAOuADePGbADuKN6+DfgP59xJnbuIiJwbsckGOOcKZnYXsBGIAvc557aa2eeATc65DcA/APebWRtBx77ubBYtIiKnNmm4AzjnHgEeGbfsnjG3M8BvndnSRETkdIXuDFUREZmcwl1EZBpSuIuITEMKdxGRaUjhLiIyDVmlDkc3s05g/2l++0yg6wyWc66Fuf4w1w7hrj/MtUO4659KtS90zk16FmjFwv31MLNNzrnWStdxusJcf5hrh3DXH+baIdz1h7F2TcuIiExDCncRkWkorOH+9UoX8DqFuf4w1w7hrj/MtUO46w9d7aGccxcRkVMLa+cuIiKnELpwn+xi3ZVmZvPN7HEz22ZmW83sE8XlTWb2EzPbVfx3RnG5mdn/Kb6ezWa2prKvILhurpm9YGY/Kt5fXLzw+a7ihdATxeXn7MLo5TKzRjP7npltL/4Mrg3Ztv+j4u/Ny2b2HTNLTdXtb2b3mdkxM3t5zLLXvK3N7I7i+F1mdkepdZ3D+v+6+Luz2cx+YGaNYx77dLH+HWb2jjHLp2YmOedC80XwkcO7gSVAAngJWFHpusbVOAdYU7xdB+wkuLD4F4G7i8vvBr5QvP0u4McEV7O6BnhmCryGTwL/AvyoeP9BYF3x9leBjxVv/z7w1eLtdcADU6D2bwEfKd5OAI1h2fYEl6vcC1SN2e4fnKrbH3gLsAZ4ecyy17StgSZgT/HfGcXbMypY/81ArHj7C2PqX1HMmySwuJhD0amcSRUv4DX+MK4FNo65/2ng05Wua5KaHwJuAnYAc4rL5gA7ire/Btw+ZvzxcRWqdx7wU+AG4EfFP8auMb/wx38GBJ/xf23xdqw4zipYe30xHG3c8rBs+9FrETcVt+ePgHdM5e0PLBoXjq9pWwO3A18bs/yEcee6/nGP/Trwz8XbJ2TN6LafypkUtmmZci7WPWUU3yavBp4BZjvnDgMU/51VHDbVXtOXgT8G/OL9ZqDPueKVyU+s75xdGL1MS4BO4B+L00rfNLMaQrLtnXMHgb8BDgCHCbbn84Rn+8Nr39ZT6mcwzn8jeLcBIaw/bOFe1oW4pwIzqwW+D/yhc27gVENLLKvIazKz9wDHnHPPj11cYqgr47FKiBG8zf5/zrnVwDDB1MBEplT9xfnpWwne9l8I1ABrSwydqtv/VCaqdUq+BjP7DFAA/nl0UYlhU7Z+CF+4l3Ox7oozszhBsP+zc+5fi4uPmtmc4uNzgGPF5VPpNV0H3GJm+4D1BFMzXwYaLbjwOZxYX1kXRj+HOoAO59wzxfvfIwj7MGx7gBuBvc65TudcHvhX4E2EZ/vDa9/WU+1nQHGn7nuA33HFuRZCVP+osIV7ORfrrigzM4Jrym5zzn1pzENjLyJ+B8Fc/Ojy/1o8muAaoH/0be255pz7tHNunnNuEcG2/Q/n3O8AjxNc+BxOrn3KXBjdOXcEaDezi4uL3g68Qgi2fdEB4Bozqy7+Ho3WH4rtX/Rat/VG4GYzm1F853JzcVlFmNk7gT8BbnHOpcc8tAFYVzxCaTGwHHiWqZxJlZ70P40dIO8iOAJlN/CZStdTor43E7wt2wy8WPx6F8Fc6E+BXcV/m4rjDbi3+Hq2AK2Vfg3Fun6NV4+WWULwi9wGfBdIFpenivfbio8vmQJ1rwI2Fbf/DwmOwAjNtgf+HNgOvAzcT3B0xpTc/sB3CPYN5Ak62A+fzrYmmNtuK359qML1txHMoY/+7X51zPjPFOvfAawds3xKZpLOUBURmYbCNi0jIiJlULiLiExDCncRkWlI4S4iMg0p3EVEpiGFu4jINKRwFxGZhhTuIiLT0P8HlCSlmA8FrMYAAAAASUVORK5CYII=\n",
- "text/plain": [
- "<Figure size 432x288 with 1 Axes>"
- ]
- },
- "metadata": {},
- "output_type": "display_data"
- }
- ],
- "source": [
- "import matplotlib.pyplot as plt\n",
- "plt.plot(drange, probas[:,0])\n",
- "plt.plot(drange, probas[:,1])\n",
- "plt.plot(drange, probas[:,2])\n",
- "plt.show()\n",
- "None"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [Section 3.3] Transform the Distorted BDD instance into a uSVP instance."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "One has a Distorted BDD instance $DBDD_{\\Lambda, \\mu, \\Sigma}$, and one wants to transform it to a uSVP instance. How to proceed ?\n",
- "\n",
- "What is a uSVP instance ? It is a **BDD instance**, where the given vector is **zero**. So, one must:\n",
- " * **homogenize** *ie* transform our $DBDD_{\\Lambda, \\mu, \\Sigma}$ instance into a **centered** $DBDD_{\\Lambda', 0, \\Sigma'}$ instance.\n",
- " * **isotropize** the instance. After centering, one always has a **distorted** instance. It means that the notion of distance is not the same depending of the direction. So, one must undistort the instance into a $DBDD_{\\Lambda'', 0, \\Pi_\\Lambda}$. And then, it is a uSVP instance."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Homogenize**. One can just remark (not trivial) that $E(\\mu, \\Sigma)$ is contained in a larger centered ellipsoid.\n",
- "$$E(\\mu, \\Sigma) \\subset E(0, \\Sigma + \\mu \\mu^T)$$\n",
- "So, the transformation is\n",
- "$$(\\Lambda, \\mu, \\Sigma) \\mapsto (\\Lambda, 0, \\Sigma' := \\Sigma + \\mu\\mu^T)$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Isotropize.** To get an isotropic distribution (*ie* with all its eigenvalues being 1), one can just multiply every element of the lattice with the pseudoinverse of $\\sqrt{\\Sigma'}$. Indeed, one gets a new covariance matrix $\\Sigma'' = (\\sqrt{\\Sigma'}^{-1})^T \\Sigma' \\sqrt{\\Sigma'}^{-1} = \\Pi_{\\Sigma'}^T \\Pi_{\\Sigma'}$. And since $\\Pi_{\\Sigma'} = \\Pi_{\\Sigma'}^T$ and $\\Pi_{\\Sigma'}^2 = \\Pi_{\\Sigma'}$, $\\Sigma''=\\Pi_{\\Sigma'} = \\Pi_\\Lambda$.\n",
- "\n",
- "So, the transformation is\n",
- "$$(\\Lambda, 0, \\Sigma') \\mapsto (M \\cdot \\Lambda, 0, \\Pi_\\Lambda)~~~\\text{ with }~~~ M:=(\\sqrt{\\Sigma'})^{-1}$$\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "After lattice reduction, from the solution $x$ to the $uSVP_{M \\cdot \\Lambda}$ problem, one can derive $x' = M^{-1} x$ the solution to the\n",
- "$DBDD_{\\Lambda,\\mu,\\Sigma}$ problem."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "### Extract of the implementation of the DBDD class\n",
- "def attack(self, beta_max=None, beta_pre=None, randomize=False, tours=1):\n",
- " \"\"\"\n",
- " Run the lattice reduction to solve the DBDD instance.\n",
- " Return the (blocksize, solution) of a succesful attack,\n",
- " or (None, None) on failure\n",
- " \"\"\"\n",
- " # [...]\n",
- " \n",
- " # Apply adequate distortion\n",
- " d = B.nrows()\n",
- " S = self.S + self.mu.T * self.mu\n",
- " L, Linv = square_root_inverse_degen(S, self.B)\n",
- " M = B * Linv\n",
- "\n",
- " # Make the matrix Integral\n",
- " denom = lcm([x.denominator() for x in M.list()])\n",
- " M = matrix(ZZ, M * denom)\n",
- "\n",
- " # Build the BKZ object\n",
- " # [...]\n",
- "\n",
- " u_den = lcm([x.denominator() for x in self.u.list()])\n",
- " \n",
- " # Run BKZ tours with progressively increasing blocksizes\n",
- " for beta in range(beta_pre, B.nrows() + 1):\n",
- " # Apply BKZ\n",
- " # [...]\n",
- " \n",
- " # Recover the tentative solution,\n",
- " # undo distorition, scaling, and test it\n",
- " v = vec(bkz.A[0])\n",
- " v = u_den * v * L / denom\n",
- " solution = matrix(ZZ, v.apply_map(round)) / u_den\n",
- "\n",
- " if not self.check_solution(solution):\n",
- " continue\n",
- "\n",
- " # Success !\n",
- " return beta, solution \n",
- "\n",
- " # Failure...\n",
- " return None, None"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### [Section 3.4] Security estimates of uSVP"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The cost of the primal attack is the minimal of $\\text{cost}_{BKZ}(\\beta, d)$ with $\\beta$ verifing the successful condition !\n",
- "\n",
- "$$\\text{cost}_{attack}(d) = \\min_{\\beta \\text{ s.t } \\sigma \\sqrt{\\beta} \\leq \\delta_0(\\beta)^{2 \\beta - \\text{dim}(\\Lambda) - 1} \\text{Vol}(\\Lambda)^{1/\\text{dim}(\\Lambda)}} \\text{cost}_{BKZ}(\\beta)$$\n",
- "\n",
- "In our case, what is $\\sigma$ ? Thanks to the isotropizing step, the secret has covariance $\\Sigma = I$ (or $\\Sigma = \\Pi_\\Lambda$ if $\\Lambda$ is not of full rank), so here $\\sigma=1$.\n",
- "\n",
- "In this cost, one can remark that $\\beta$ can give a scale of the security of the problem. Instead using bit-size scale, it is possible to use $\\beta$. This point of view has the advantage of abstracting from the BKZ cost model. It is the choice made in the paper. **This choice is possible thanks to the particular structure of the attack.** For example, it is not possible to make the same choice for an analysis of the dual attack. \n",
- "\n",
- "Assuming the Gaussian Heuristic (GH) and Geometric Series Assumption (GSA), a limiting value of root-Hermite factor $\\delta_0$ achievable by BKZ with block size $\\beta$ is $$\\delta_0(\\beta) \\approx \\left ( \\frac{\\beta}{2 \\pi e} (\\pi \\beta)^{\\frac{1}{\\beta}} \\right ) ^ {\\frac{1}{2(\\beta-1)}}$$\n",
- "\n",
- "*Warning*, the Gaussian Heuristic is a good experimental approximation only if $\\beta$ is sufficiently big ($\\beta > 40$)."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "### Extract of the implementation of utils.sage\n",
- "def bkzgsa_gso_len(logvol, i, d, beta=None, delta=None):\n",
- " if delta is None:\n",
- " delta = compute_delta(beta)\n",
- " return RR(delta**(d - 1 - 2 * i) * exp(logvol / d))\n",
- "\n",
- "def compute_beta_delta(d, logvol, tours=1, interpolate=True, probabilistic=False):\n",
- " \"\"\"\n",
- " Computes the beta value for given dimension and volumes\n",
- " It is assumed that the instance has been normalized and sphericized, \n",
- " i.e. that the covariance matrices of the secret is the identity\n",
- " :d: integer\n",
- " :vol: float\n",
- " \"\"\"\n",
- " bbeta = None\n",
- " pprev_margin = None\n",
- "\n",
- " # Keep increasing beta to be sure to catch the second intersection\n",
- " if not probabilistic:\n",
- " for beta in range(2, d):\n",
- " lhs = RR(sqrt(beta))\n",
- " rhs = bkzgsa_gso_len(logvol, d - beta, d, beta=beta)\n",
- "\n",
- " if lhs < rhs and bbeta is None:\n",
- " margin = rhs / lhs\n",
- " prev_margin = pprev_margin\n",
- " bbeta = beta\n",
- "\n",
- " if lhs > rhs:\n",
- " bbeta = None\n",
- " pprev_margin = rhs / lhs\n",
- "\n",
- " if bbeta is None:\n",
- " return 9999, 0\n",
- "\n",
- " ddelta = compute_delta(bbeta) * margin**(1. / d)\n",
- " if prev_margin is not None and interpolate:\n",
- " beta_low = log(margin) / (log(margin) - log(prev_margin))\n",
- " else:\n",
- " beta_low = 0\n",
- " assert beta_low >= 0\n",
- " assert beta_low <= 1\n",
- " return bbeta - beta_low, ddelta\n",
- "\n",
- " else:\n",
- " # [...]\n",
- " return average_beta, ddelta\n",
- "\n",
- "### Extract of the implementation of the DBDD_generic class\n",
- "def estimate_attack(self, probabilistic=False, tours=1, silent=False):\n",
- " \"\"\" Assesses the complexity of the lattice attack on the instance.\n",
- " Return value in Bikz\n",
- " \"\"\"\n",
- " (Bvol, Svol, dvol) = self.volumes()\n",
- " dim_ = self.dim()\n",
- " beta, delta = compute_beta_delta(\n",
- " dim_, dvol, probabilistic=probabilistic, tours=tours)\n",
- "\n",
- " # [...]\n",
- " return (beta, delta)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Section 4.1] Perfect Hints"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Perfect hint**. A perfect hint on the secret $s$ is the knowledge of $v \\in \\mathbb{Z}^{d-1}$ and $l \\in \\mathbb{Z}$, such that\n",
- "$$\\langle s, v \\rangle = l$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating a perfect hint into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let $v \\in \\mathbb{Z}^d$ and $l \\in \\mathbb{Z}$ be such that\n",
- "$\\langle s, v \\rangle = l$. Note that the hint can also be written as\n",
- "$$\\langle \\bar{s}, \\bar{v} \\rangle = 0$$\n",
- "\n",
- "where $\\bar{s}=(e, z, 1)$ is the solution of the instance $\\mathcal{I}$ (when $s=(e,z)$) and $\\bar{v} := (v, -l)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "With this hint, the lattice $\\Lambda$ of the instance $\\mathcal{I}$ is reduced. Indeed, one keeps only the elements $x$ of the lattice $\\Lambda$ which verify the property $\\langle x, \\bar{v} \\rangle = 0$.\n",
- "\n",
- "$$\\Lambda' = \\Lambda \\cap \\{ x \\in \\mathbb{Z}^d~|~\\langle x, \\bar{v} \\rangle = 0 \\}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "One will interpret Distorted BDD as the promise that the secret $\\bar{s}$ follows a Gaussian distribution of center $\\mu$ and covariance $\\Sigma$, *ie* $\\bar{s} \\sim \\mathcal{D}^{d}_{\\Sigma, \\mu}$.\n",
- "\n",
- "Then, one wants to know the distribution of\n",
- "$$\\bar{s}~|~\\langle \\bar{s}, \\bar{v} \\rangle = 0$$\n",
- "\n",
- "Using *\"Linear transformation of multivariate normal distribution: Marginal, joint and posterior\"* of L.-P. Liu, one directly has $\\bar{s}~|~(\\langle \\bar{s}, \\bar{v} \\rangle = 0) \\sim \\mathcal{D}^{d}_{\\Sigma', \\mu'}$ with\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\mu' & = \\mu - \\frac{\\langle \\bar{v}, \\mu \\rangle}{\\bar{v}^T \\Sigma \\bar{v}}\\Sigma \\bar{v} \\\\\n",
- " \\Sigma' & = \\Sigma - \\frac{\\Sigma \\bar{v} (\\Sigma \\bar{v})^T}{\\bar{v}^T \\Sigma \\bar{v}}\\\\\n",
- "\\end{align*}$$"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "### Extract of the implementation of the DBDD class\n",
- "def integrate_perfect_hint(self, v, l):\n",
- " V = concatenate(v, -l)\n",
- " VS = V * self.S\n",
- " den = scal(VS * V.T)\n",
- "\n",
- " if den == 0:\n",
- " raise RejectedHint(\"Redundant hint\")\n",
- "\n",
- " self.D = lattice_orthogonal_section(self.D, V)\n",
- "\n",
- " num = self.mu * V.T\n",
- " self.mu -= (num / den) * VS\n",
- " num = VS.T * VS\n",
- " self.S -= num / den"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To **compute a basis of $\\Lambda'$**, here is an algorithm working on the dual lattice.\n",
- "1. Let $D$ be dual basis of $B$. Compute $D_\\bot := \\Pi_\\bar{v}^\\bot \\cdot D$\n",
- "2. Apply LLL algorithm on $D_\\bot$ to eliminate linear dependencies. Then delete the first row of $D'$ (which is $0$ because with the hyperplane intersection, the dimension of the lattice is decremented).\n",
- "3. Output the dual of the resulting matrix."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 22,
- "metadata": {},
- "outputs": [],
- "source": [
- "### Extract of the implementation of geometry.sage\n",
- "def lattice_modular_intersection(D, V, k):\n",
- " V = project_and_eliminate_dep(D, V)\n",
- " r = V.nrows()\n",
- "\n",
- " # Project the dual basis orthogonally to v\n",
- " PV = projection_matrix(V)\n",
- " D = D - D * PV\n",
- "\n",
- " # Eliminate linear dependencies\n",
- " D = D.LLL()\n",
- " D = D[r:]\n",
- "\n",
- " # Go back to the primal\n",
- " return D"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let show its **correction**. According to *\"Perfect lattices in Euclidean spaces\"* of J. Martinet, for $F$ a subspace of $\\mathbb{R}^d$, one has\n",
- "$$(\\Lambda \\cap F)^* = \\Pi_F \\cdot \\Lambda$$\n",
- "Here, the subspace $F$ is $\\bar{v}^\\bot$, so\n",
- "$$(\\Lambda \\cap \\bar{v}^\\bot)^* = \\Pi_\\bar{v}^\\bot \\cdot \\Lambda$$\n",
- "\n",
- "\n",
- "Let denote $\\Lambda_o$ the output of the algorithm. One has, by step 1, $$\\Lambda_o = (\\Pi_\\bar{v}^\\bot \\cdot \\Lambda)^* = \\left ( (\\Lambda \\cap \\bar{v}^\\bot)^* \\right ) ^* = \\Lambda \\cap \\bar{v}^\\bot = \\Lambda'$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "What about the **dimension** and the **volume** of the new lattice ?\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\text{dim}(\\Lambda') & = \\text{dim}(\\Lambda) - 1 \\\\\n",
- " \\text{Vol}(\\Lambda') & = \\|\\bar{v}\\| \\cdot \\text{Vol}(\\Lambda) \\\\\n",
- "\\end{align*}$$\n",
- "\n",
- "The last egality is **verified** when $\\frac{\\bar{v}}{i} \\not \\in \\Lambda^*$ for any $i \\geq 2$ (when $\\bar{v}$ is \"primitive with respect to $\\Lambda^*$\"). This is an assumption **empirically verified** when $v$ has some small coefficients. \n",
- "\n",
- "<!--\n",
- "$$\\langle \\bar{s}, \\frac{\\bar{v}}{i} \\rangle = \\langle s, v \\rangle + \\left ( \\frac{-l}{i} \\right ) = l - \\frac{l}{i} = l \\cdot \\left ( 1 - \\frac{1}{i} \\right ) = l \\cdot \\left ( \\frac{i-1}{i} \\right )$$\n",
- "If there **exists** a $i \\geq 2$ such that $\\langle \\bar{s}, \\frac{\\bar{v}}{i} \\rangle \\in \\mathbb{Z}$, then $i~|~l$. And then, $\\frac{\\bar{v}}{i}$ and $\\bar{v}$ leak the **same information**, so one can applied the property for $\\frac{v}{i}$. \n",
- " -->"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**When can one obtain a hint like it ?**\n",
- " * Full leak without noise of a original coefficient\n",
- " * Noisy leakage, but with a rather high guessing confidence\n",
- " * Hint 'by design\""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Section 4.2] Modular Hints"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Modular hint**. A modular hint on the secret $s$ is the knowledge of $v \\in \\mathbb{Z}^{d-1}$, $k \\in \\mathbb{Z}$ and $l \\in \\mathbb{Z}$, such that\n",
- "$$\\langle s, v \\rangle = l \\text{ mod } k$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating a modular hint into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let $v \\in \\mathbb{Z}^{d-1}$, $k \\in \\mathbb{Z}$ and $l \\in \\mathbb{Z}$ be such that\n",
- "$\\langle s, v \\rangle = l \\text{ mod } k$. Note that the hint can also be written as\n",
- "$$\\langle \\bar{s}, \\bar{v} \\rangle = 0 \\text{ mod } k$$\n",
- "\n",
- "where $\\bar{s}=(e, z, 1)$ is the solution of the instance $\\mathcal{I}$ (when $s=(e,z)$) and $\\bar{v} := (v, -l)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Intuitively, such a hint should only **sparsify** the lattice, and leave the average and the variance\n",
- "**unchanged**. This is not entirely true, this is only (approximately) true when the variance is\n",
- "sufficiently large in the direction of $\\bar{v}$ to ensure smoothness, *ie* when $k^2 << \\bar{v}^T \\Sigma \\bar{v}$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "So, in this **smooth case**, we therefore have\n",
- "$$\\begin{align*}\n",
- " \\Lambda' & = \\Lambda \\cap \\{ x \\in \\mathbb{Z}^d~|~\\langle x, \\bar{v} \\rangle = 0 \\text{ mod } k \\} \\\\\n",
- " \\mu' & = \\mu \\\\\n",
- " \\Sigma' & = \\Sigma \\\\\n",
- "\\end{align*}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "To compute a basis of $\\Lambda'$, here is an algorithm working on the dual lattice.\n",
- "1. Let $D$ be dual basis of $B$.\n",
- "2. Redefine $\\bar{v} \\leftarrow \\Pi_\\Lambda \\cdot \\bar{v}$, noting that this does not affect the validity of the hint.\n",
- "3. Append $\\frac{\\bar{v}}{k}$ to $D$ and obtain $D'$\n",
- "4. Apply LLL algorithm on $D'$ to eliminate linear dependencies. Then delete the first row of $D'$ (which is $0$ since we introduced a linear dependency).\n",
- "5. Output the dual of the resulting matrix."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 3,
- "metadata": {},
- "outputs": [],
- "source": [
- "### Extract of the implementation of geometry.sage\n",
- "def lattice_modular_intersection(D, V, k):\n",
- " # Project v on Span(B)\n",
- " V = project_and_eliminate_dep(D, V)\n",
- " r = V.nrows()\n",
- "\n",
- " # append the equation in the dual\n",
- " V /= k\n",
- " # D = dual_basis(B)\n",
- " D = D.stack(V)\n",
- "\n",
- " # Eliminate linear dependencies\n",
- " D = D.LLL()\n",
- " D = D[r:]\n",
- "\n",
- " # Go back to the primal\n",
- " return D"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let show its **correction**. Let denote $\\Lambda_o$ its output. One has, by step 3, $$\\text{Span}(D') = \\text{Span}(D) + \\text{Span}(\\frac{\\bar{v}}{k})$$\n",
- "So, $$\\Lambda_o^* = \\Lambda^* + \\frac{\\bar{v}}{k}\\mathbb{Z}$$\n",
- "And so, $$\\begin{align*}\n",
- "\\Lambda_o & = \\left ( \\Lambda^* + \\frac{\\bar{v}}{k}\\mathbb{Z} \\right ) ^* = \\left ( \\Lambda^* \\right ) ^* \\cap \\left ( \\frac{\\bar{v}}{k}\\mathbb{Z} \\right )^* \\\\\n",
- "& = \\Lambda \\cap \\{ x \\in \\mathbb{Z}^d~|~\\forall l\\in\\mathbb{Z}, \\langle x, \\frac{\\bar{v}}{k} l \\rangle \\in \\mathbb{Z} \\} \\\\\n",
- "& = \\Lambda \\cap \\{ x \\in \\mathbb{Z}^d~|~\\langle x, \\frac{\\bar{v}}{k} \\rangle \\in \\mathbb{Z} \\} \\\\\n",
- "& = \\Lambda \\cap \\{ x \\in \\mathbb{Z}^d~|~\\langle x, \\bar{v} \\rangle = 0 \\text{ mod } k \\} = \\Lambda' \\\\\n",
- "\\end{align*}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "What about the **dimension** and the **volume** of the new lattice ?\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\text{dim}(\\Lambda') & = \\text{dim}(\\Lambda) \\\\\n",
- " \\text{Vol}(\\Lambda') & = k \\cdot \\text{Vol}(\\Lambda) \\\\\n",
- "\\end{align*}$$\n",
- "\n",
- "The last egality is **verified** under a primitivity condition, which is **empirically verified** when $v$ has some small coefficients. "
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**And, what about the case where it is not smooth ?**\n",
- "\n",
- " * If $k^2 >> v^T \\Sigma v$, one can approximate this hint by a **perfect hint** with $\\langle s, v \\rangle = l + ki$ for some $i$.\n",
- " * If $k^2 \\approx v^T \\Sigma v$, one can still apply the formulae of the arbitrary hints. One can numerically compute the average $\\mu_c$ and the variance $\\sigma_c^2$ of $\\langle \\bar{s}, \\bar{v} \\rangle$, *ie* compute the average $\\mu_c$ and the variance $\\sigma_c^2$ of the one-dimensional discrete Gaussian distribution of average $\\langle \\mu, \\bar{v} \\rangle$ and variance $\\sigma^2 = v^T \\Sigma v$ modulus $k$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**When can one obtain a hint like it ?**\n",
- " * Leakage of the form $a=|s|$, because it implies $s=a \\text{ mod } 2a$ (obtained by a timing attack).\n",
- " * Leakage Modulus $q$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Section 4.3] Approximate Hints (conditioning)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Approximate hint**. A approximate hint on the secret $s$ is the knowledge of $v \\in \\mathbb{Z}^{d-1}$ and $l \\in \\mathbb{Z}$, such that\n",
- "$$\\langle s, v \\rangle + e = l$$\n",
- "where $e$ models noise following a distribution $\\mathcal{D}_{\\sigma_e^2, 0}^1$, independent of $s$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating an approximate hint into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let $v \\in \\mathbb{Z}^d$ and $l \\in \\mathbb{Z}$ be such that\n",
- "$\\langle s, v \\rangle \\approx l$. Note that the hint can also be written as\n",
- "$$\\langle \\bar{s}, \\bar{v} \\rangle + e = 0$$\n",
- "\n",
- "where $\\bar{s}=(e, z, 1)$ is the solution of the instance $\\mathcal{I}$ (when $s=(e,z)$), $\\bar{v} := (v, -l)$ and $e$ follows $\\mathcal{D}_{\\sigma_e^2, 0}^1$ distribution."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "With this hint, the lattice $\\Lambda$ of the instance $\\mathcal{I}$ doesn't change, because the hint is approximate and so no possibility is strictly rejected !\n",
- "\n",
- "$$\\Lambda' = \\Lambda$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "As before, one will interpret Distorted BDD as the promise that the secret $s$ follows a Gaussian distribution of center $\\mu$ and covariance $\\Sigma$, *ie* $\\bar{s} \\sim \\mathcal{D}^{d}_{\\Sigma, \\mu}$.\n",
- "\n",
- "Then, one wants to know the distribution of\n",
- "$$\\bar{s}~|~\\langle \\bar{s}, \\bar{v} \\rangle + e = 0$$\n",
- "\n",
- "Using *\"Linear transformation of multivariate normal distribution: Marginal, joint and posterior\"* of L.-P. Liu, one directly has $\\bar{s}~|~(\\langle \\bar{s}, \\bar{v} \\rangle + e = 0) \\sim \\mathcal{D}^{d}_{\\Sigma', \\mu'}$ with\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\mu' & = \\mu - \\frac{\\langle \\bar{v}, \\mu \\rangle}{\\bar{v}^T \\Sigma \\bar{v} + \\sigma_e^2}\\Sigma \\bar{v} \\\\\n",
- " \\Sigma' & = \\Sigma - \\frac{\\Sigma \\bar{v} (\\Sigma \\bar{v})^T}{\\bar{v}^T \\Sigma \\bar{v} + \\sigma_e^2}\\\\\n",
- "\\end{align*}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Remark:\n",
- " * If $\\sigma_e = 0$, one falls back to a perfect hint $\\langle s, v \\rangle = l$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**When can one obtain a hint like it ?**\n",
- " * Any noisy side channel information about a secret coefficient.\n",
- " * Decryption failures."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Section 4.4] Approximate Hints (*a posteriori*)"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Approximate hint (*a posteriori*)**. An approximate hint (*a posteriori*) on the secret $s$ is the knowledge of the distribution of $\\langle s, v \\rangle$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating an approximate hint (*a posteriori*) into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Let $\\mathcal{P}$ the distribution of $\\langle s, v \\rangle$. With $\\mathcal{P}$, the hint can also be written as the distribution $\\mathcal{P}_i$ of $\\langle \\bar{s}, \\bar{v} \\rangle$ where $\\bar{s}=(e, z, 1)$ is the solution of the instance $\\mathcal{I}$ (when $s=(e,z)$) and $\\bar{v} := (v, -i)$, for some $i$.\n",
- "\n",
- "Let denote $\\mu_i$ the average of $\\mathcal{P}_i$ and $\\sigma_i^2$ its variance."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This hint gives the known distribution on the direction of $\\frac{\\bar{v}}{\\|\\bar{v}\\|}$.\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\mu' &= \\hat{\\Pi}_{\\bar{v}}^\\bot \\mu + \\mu_{ap} \\cdot \\frac{\\Sigma \\bar{v}}{\\bar{v}^T \\Sigma \\bar{v}} \\\\\n",
- " \\Sigma' &= \\hat{\\Pi}_{\\bar{v}}^\\bot \\cdot \\Sigma \\cdot (\\hat{\\Pi}_{\\bar{v}}^\\bot)^T + \\sigma_{ap}^2 \\cdot \\frac{(\\Sigma \\bar{v}) (\\Sigma \\bar{v})^T}{(\\bar{v}^T \\Sigma \\bar{v})^2}\n",
- "\\end{align*}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "where $$\\hat{\\Pi}_{\\bar{v}} := \\frac{\\Sigma \\bar{v} \\bar{v}^T}{\\bar{v}^T \\Sigma \\bar{v}}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The idea of the pseudo-projection $\\hat{\\Pi}_{\\bar{v}}^\\bot$ is to **remove the contribution of $\\bar{v}$ in the average $\\mu$ and the covariance matrix $\\Sigma$**, in order to add a new contribution after."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The lattice $\\Lambda$ of the instance $\\mathcal{I}$ generally doesn't change, because no possibility is strictly rejected !\n",
- "\n",
- "$$\\Lambda' = \\Lambda$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Warning**. *This is an approximation. As said in a previous remark, the Distorted BDD can be seen as a multivariate Gaussian distribution of center $\\mu$ and covariance $\\Sigma$. Here, the distribution $\\mathcal{P}_i$ may not be a Gaussian distribution, but one makes this approximation.*"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Remark:\n",
- " * If $\\mu_{ap}=0$ and $\\sigma_{ap} = 0$, one falls back to a perfect hint $\\langle s, v \\rangle = i$.\n",
- " * **Non-smooth case in modular hints**. If $k^2 \\approx v^T \\Sigma v$, one can still apply the formulae of the approximate hints (*a posteriori*). One can numerically compute the average $\\mu_{ap}$ and the variance $\\sigma_{ap}^2$ of $\\langle \\bar{s}, \\bar{v} \\rangle$, *ie* compute the average $\\mu_{ap}$ and the variance $\\sigma_{ap}^2$ of the one-dimensional discrete Gaussian distribution of average $\\langle \\mu, \\bar{v} \\rangle$ and variance $\\sigma^2 = v^T \\Sigma v$ modulus $k$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**When can one obtain a hint like it ?**\n",
- " * Hint from template attack (like the attack against Frodo)\n"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Most general case] Noised *A Posteriori* Approximate Hints"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Noised *A Posteriori* Approximate hint**. Such a hint on the secret $s$ is the knowledge of the distribution of $$\\langle s, v \\rangle+e$$\n",
- "where $e$ models noise following a distribution $\\mathcal{D}_{\\sigma_e^2, 0}^1$, independent of $s$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating such a hint into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$.\n",
- "\n",
- "Let $\\mathcal{P}$ the distribution of $\\langle s, v \\rangle$. With $\\mathcal{P}$, the hint can also be written as the distribution $\\mathcal{P}_i$ of $\\langle \\bar{s}, \\bar{v} \\rangle+e$ where $\\bar{v} := (v, -i)$, for some $i$.\n",
- "\n",
- "Let denote $\\mu_i$ the average of $\\mathcal{P}_i$ and $\\sigma_i^2$ its variance. Let denote $\\Sigma_e^2$ the variance of the independent noise distribution.\n",
- "\n",
- "This hint gives the known distribution on the direction of $\\frac{\\bar{v}}{\\|\\bar{v}\\|}$. One can transform $\\mathcal{I}$ into $\\mathcal{I}'=(\\Lambda, \\mu', \\Sigma')$ with\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\mu' &= \\hat{\\Pi}_{\\bar{v}}^\\bot \\mu + \\mu_{ap} \\cdot \\frac{\\Sigma \\bar{v}}{\\bar{v}^T \\Sigma \\bar{v} + \\sigma_e^2} \\\\\n",
- " \\Sigma' &= \\hat{\\Pi}_{\\bar{v}}^\\bot \\cdot \\Sigma \\cdot (\\hat{\\Pi}_{\\bar{v}}^\\bot)^T + (\\sigma_{ap}^2 + \\sigma_e^2) \\cdot \\frac{(\\Sigma \\bar{v}) (\\Sigma \\bar{v})^T}{(\\bar{v}^T \\Sigma \\bar{v}+\\sigma_e^2)^2}\n",
- "\\end{align*}$$\n",
- "\n",
- "where $$\\hat{\\Pi}_{\\bar{v}} := \\frac{\\Sigma \\bar{v} \\bar{v}^T}{\\bar{v}^T \\Sigma \\bar{v}}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "The lattice $\\Lambda$ of the instance $\\mathcal{I}$ generally doesn't change, because no possibility is strictly rejected !\n",
- "\n",
- "$$\\Lambda' = \\Lambda$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "This case gathers all the previous cases (except for the modular case, which is a special case):\n",
- " * If $\\mu_{ap}=0$, $\\sigma_{ap}^2=0$ and $\\sigma_e^2=0$, one falls back to a perfect hint.\n",
- " * It $\\mu_{ap}=0$, $\\sigma_{ap}^2=0$, one falls back to an approximate hint (conditioning).\n",
- " * If $\\sigma_e^2=0$, one falls back to an approximate hint (*a posteriori*)."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "### [Section 4.5] Short vector hints"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "In the original primal attack, one **removes some equations in the problem to improve the result of the attack**. Indeed, if there are too many equations, the lattice becomes too complicated and it is difficult to find a short vector. But if there are not enough equations, the lattice will be too sparse."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "Here, one didn't remove equations in the transformation of the LWE problem into a Distorted BDD. One cannot do it before integrating hints, because removing equations changes the elements of the lattice (it is a projection onto a hyperplane). One must do it **after integrating all the hints**."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "How to remove a equation ? One chooses a vector $\\bar{v}$, and one does **the projection of the lattice on $\\bar{v}^\\bot$, the hyperplane orthogonal to $\\bar{v}$**."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "How to choose this vector $\\bar{v}$ ? Can we take a random vector of $\\Lambda$ ? **The answer is \"no\"**. Indeed, the secret vector must remain the smallest vector of the lattice."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Fact: If one chooses $\\bar{v}$ as a short vector (*ie* $\\|v\\| \\approx \\|s\\|$), then it will be almost orthorgonal to the other shorts vectors (including $\\bar{s}$), and so, the norms of these vectors are not too affected.**"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "*Proof of the fact* :\n",
- "\n",
- "Let denote $\\bar{s}$ the smallest vector of the lattice, and $\\bar{v}$ another vector ($\\bar{v} \\not \\in \\text{Span}(\\bar{s})$). There exists $y \\in \\bar{s}^\\bot$ such that $\\bar{v} = \\alpha \\cdot \\bar{s} + y$, with $\\alpha = \\frac{\\langle \\bar{s}, \\bar{v} \\rangle}{\\langle \\bar{s}, \\bar{s}\\rangle}$. Because $\\bar{s}$ is the smallest vector of the lattice, for $x\\in\\mathcal{R}$, one has\n",
- "$$\\begin{align*}\n",
- "\\|s\\|^2 & \\leq \\|s - \\lfloor x \\rceil \\cdot v \\|^2 \\\\\n",
- " & = \\| ( 1 - \\alpha \\lfloor x \\rceil ) \\cdot s - \\lfloor x \\rceil \\cdot y \\|^2 \\\\\n",
- " & = ( 1 - \\alpha \\lfloor x \\rceil )^2 \\|s\\|^2 + \\lfloor x \\rceil^2 \\cdot \\|y\\|^2 \\\\\n",
- " & = \\lfloor x \\rceil^2 \\cdot (\\|y\\|^2 + \\alpha^2 \\|s\\|^2) - \\lfloor x \\rceil \\cdot (2 \\alpha \\|s\\|^2) + \\|s\\|^2 \\\\\n",
- " & = \\lfloor x \\rceil^2 \\cdot \\|v\\|^2 - \\lfloor x \\rceil \\cdot (2 \\alpha \\|s\\|^2) + \\|s\\|^2\n",
- "\\end{align*}$$\n",
- "So,\n",
- "$$0 \\leq \\|v\\|^2 \\cdot \\lfloor x \\rceil^2 - (2 \\alpha \\|s\\|^2) \\cdot \\lfloor x \\rceil $$\n",
- "\n",
- "Let choose $x:=\\alpha \\frac{\\|s\\|^2}{\\|v\\|^2} = \\frac{\\langle \\bar{s}, \\bar{v} \\rangle}{\\langle \\bar{v}, \\bar{v}\\rangle}$, then\n",
- "$$0 \\leq \\lfloor x \\rceil^2 - 2 x \\cdot \\lfloor x \\rceil$$\n",
- "\n",
- "And so (after studying the cases where $x \\geq 0$ and where $x \\leq 0$, one has\n",
- "$$|x| \\leq \\frac{1}{2}$$\n",
- "\n",
- "The angle $\\lambda$ betwen $s$ and $v$ verifies $$\\cos(\\lambda) = \\frac{\\left \\| \\frac{\\langle s, v \\rangle}{\\langle v, v \\rangle} \\cdot v \\right \\|}{\\|s\\|} = \\frac{\\left \\| x \\cdot v \\right \\|}{\\|s\\|} = |x| \\frac{\\|v\\|}{\\|s\\|}$$ \n",
- "\n",
- "So, $$\\cos(\\lambda) \\leq \\frac{\\|v\\|}{2\\|s\\|}$$\n",
- "\n",
- "And then, if $s$ is projected into $\\bar{v}^\\bot$, one has\n",
- "$$\\|\\Pi_{\\bar{v}}^\\bot \\bar{s}\\| = \\|\\bar{s}\\| \\cdot \\sqrt{1-\\cos^2(\\lambda)} \\geq \\|\\bar{s}\\| \\cdot \\sqrt{1-\\frac{\\|v\\|^2}{4 \\|s\\|^2}}$$\n",
- "\n",
- "If $v$ is a short vector (*ie* $\\|v\\| \\approx \\|s\\|$), $$\\lambda_\\text{min} = \\frac{\\pi}{3} \\hspace{1cm} \\text{ and } \\hspace{1cm} \\|\\Pi_{\\bar{v}}^\\bot \\bar{s}\\| \\geq \\frac{\\sqrt{3}}{2} \\|\\bar{s}\\| \\approx 0.866 \\cdot \\|\\bar{s}\\| $$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Integrating a short vector $\\bar{v}$ hint into a Distorted BDD instance $\\mathcal{I} = (\\Lambda, \\mu, \\Sigma)$."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "One simply makes a projection of the lattice.\n",
- "\n",
- "$$\\begin{align*}\n",
- " \\Lambda' & = \\Pi_\\bar{v}^\\bot \\cdot \\Lambda \\\\\n",
- " \\mu' & = \\Pi_\\bar{v}^\\bot \\cdot \\mu \\\\\n",
- " \\Sigma' & = \\Pi_\\bar{v}^\\bot \\cdot \\Sigma \\cdot (\\Pi_\\bar{v}^\\bot)^T \\\\\n",
- "\\end{align*}$$\n",
- "\n",
- "To compute a basis of $\\Lambda$, one realises the projection of the basis of $\\Lambda$ and one uses the LLL algorithm to remove linear dependencies."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "$$\\begin{align*}\n",
- " \\dim(\\Lambda') &= \\dim(\\Lambda) - 1 \\\\\n",
- " \\text{Vol}(\\Lambda') &= \\frac{1}{v^T v} \\text{Vol}(\\Lambda)\n",
- "\\end{align*}$$\n",
- "\n",
- "The second equality is up to a primitivity condition ($\\bar{v}$ must be a primitive vector in respect to $\\Lambda$), which be often verified because $\\bar{v}$ is small."
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Some experiments**"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 10,
- "metadata": {},
- "outputs": [],
- "source": [
- "load(\"../framework/instance_gen.sage\")\n",
- "# NIST1 FRODOKEM-640\n",
- "n = 640\n",
- "m = 640\n",
- "q = 2**15\n",
- "frodo_distribution = [9288, 8720, 7216, 5264, 3384,\n",
- " 1918, 958, 422, 164, 56, 17, 4, 1]\n",
- "D_s = get_distribution_from_table(frodo_distribution, 2 ** 16)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 13,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "263.31223544175333"
- ]
- },
- "execution_count": 13,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "sqrt((n+m)*variance(D_s))"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 15,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "32768"
- ]
- },
- "execution_count": 15,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "q"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "## Application to Frodo"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 1,
- "metadata": {},
- "outputs": [],
- "source": [
- "load(\"../framework/instance_gen.sage\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### NIST1 FRODOKEM-640"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 2,
- "metadata": {},
- "outputs": [],
- "source": [
- "# NIST1 FRODOKEM-640\n",
- "n = 640\n",
- "m = 640\n",
- "q = 2**15\n",
- "frodo_distribution = [9288, 8720, 7216, 5264, 3384,\n",
- " 1918, 958, 422, 164, 56, 17, 4, 1]\n",
- "D_s = get_distribution_from_table(frodo_distribution, 2 ** 16)\n",
- "load(\"Frodo_Single_data/simulation_distribution_NIST1.sage\")\n",
- "load(\"Frodo_Single_data/aposteriori_distribution_NIST1.sage\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Original Security"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 12,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[4;37m Attack without hints: 487.00 bikz (129.06 bits) \u001b[0m\n"
- ]
- }
- ],
- "source": [
- "A, b, dbdd = initialize_from_LWE_instance(DBDD_predict_diag,\n",
- " n,\n",
- " q, m, D_s, D_s, verbosity=0)\n",
- "dbdd.integrate_q_vectors(q, indices=range(n, n + m))\n",
- "(beta, _) = dbdd.estimate_attack()\n",
- "logging(\"Attack without hints: %3.2f bikz (%3.2f bits)\" % (beta, 0.265*beta), style=\"HEADER\")"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Refined Side channel attack"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "$$L = \\{-11, ..., 11\\}$$"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**First phase.** One first learns a posteriori distributions from the experimental data of [11] *\"Assessing the feasibility of single trace power analysis of Frodo.\"* of J. W. Bos, S. Friedberger, M. Martinoli, E. Oswald, and M. Stam. The score table for a secret coefficient $s_i$ is denoted $S_i$.\n",
- "\n",
- "\n",
- "\n",
- "One can then derive the a posteriori probability distribution of the secret *knowing the value of the best guess*. Let $x,g \\in L$ and $0 \\leq i \\leq n - 1$,\n",
- "\n",
- "$$D^{apost}_{i,g}(x) := \\mathbb{P}[s_i = x~|~ \\text{bestguess}(S_i) = g]$$\n",
- "\n",
- "and more precisely its center denoted $\\mu^{apost}_{i,g}$ and its variance denoted $v^{apost}_{i,g}$.\n",
- "\n",
- "Similarly as the authors of [11], one makes an independence assumption. One assumes that the obtained score only depends on the secret coefficient $x \\in L$ and the obtained best guess $g \\in L$. One then omits the index $i$ and denote for $x, g \\in L$,\n",
- "\n",
- "$$D^{apost}_{g}(x) := \\mathbb{P}[s_0 = x~|~ \\text{bestguess}(S_0) = g]$$ \n",
- "\n",
- "with $\\mu^{apost}_g$ its center and $v^{apost}_g$ its variance.\n",
- "\n",
- ""
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "**Second phase.**\n",
- "\n",
- "Let $0 \\leq i \\leq n-1$, $v_i$ be the $i$-th canonical\n",
- "vector and $g_i$ be the best guess associated to the $i$-th secret coefficient. The approximate hint is\n",
- "\n",
- "$$\\langle s, v_i \\rangle = \\mu^{apost}_{g_i} + e$$\n",
- "\n",
- "where $e$ models the noise following a distribution $N_1(0, v^{apost}_{g_i})$ where $\\mu^{apost}_{g_i}$ and $v^{apost}_{g_i}$ have been computed in phase one."
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 25,
- "metadata": {},
- "outputs": [],
- "source": [
- "def simu_measured(secret):\n",
- " \"\"\"\n",
- " This fonction simulates the information gained by\n",
- " Bos et al attack. The simulation is based on a\n",
- " distribution obtained with a large amount of data\n",
- " for Bos et al suite (in Matlab).\n",
- " :secret: an integer being the secret value\n",
- " :measurement: an integer that represents the output\n",
- " of Bos et al attack.\n",
- " \"\"\"\n",
- " secret = recenter(secret)\n",
- " distrib_of_guesses = renormalize_dist(Dguess[secret])\n",
- " measurement = draw_from_distribution(distrib_of_guesses)\n",
- " return measurement"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 26,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[4;37m Build DBDD from LWE \u001b[0m\n",
- "\u001b[1;33m n=640 \t m=640 \t q=32768 \u001b[0m\n"
- ]
- }
- ],
- "source": [
- "A, b, dbdd = initialize_from_LWE_instance(DBDD_predict_diag,\n",
- " n,\n",
- " q, m, D_s, D_s, verbosity=2)\n",
- "measured = [simu_measured(dbdd.u[0, i]) for i in range(n)]\n",
- "report_every = 50"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 27,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u0 = 5.36982968369830 + χ(σ²=0.311) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1281, δ=1.00346938, β=486.83 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u50 = 1.755049101352603 + χ(σ²=0.765) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1272, δ=1.00354349, β=472.57 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate perfect hint \u001b[0m \u001b[3;34m u100 = 0.000000000000000 \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1267, δ=1.00360062, β=462.07 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u150 = 3.069447793585725 + χ(σ²=0.444) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1261, δ=1.00366333, β=450.78 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u200 = -1.515323025952966 + χ(σ²=1.992) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1253, δ=1.00373639, β=438.35 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u250 = -1.515323025952966 + χ(σ²=1.992) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1247, δ=1.00380231, β=427.52 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u300 = -1.515323025952966 + χ(σ²=1.992) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1242, δ=1.00386479, β=417.64 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate perfect hint \u001b[0m \u001b[3;34m u350 = 0.000000000000000 \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1236, δ=1.00393211, β=407.43 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u400 = 4.090359168241965 + χ(σ²=0.159) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1229, δ=1.00400727, β=396.46 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u450 = -1.515323025952966 + χ(σ²=1.992) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1223, δ=1.00407544, β=386.85 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate perfect hint \u001b[0m \u001b[3;34m u500 = 0.000000000000000 \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1212, δ=1.00417650, β=373.40 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u550 = -2.661084529507207 + χ(σ²=0.837) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1202, δ=1.00427359, β=361.13 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate perfect hint \u001b[0m \u001b[3;34m u600 = 0.000000000000000 \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1194, δ=1.00436246, β=350.37 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate approx hint \u001b[0m \u001b[0m (aposteriori) \u001b[0m \u001b[3;34m u639 = -1.515323025952966 + χ(σ²=1.992) \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1192, δ=1.00440147, β=345.77 \u001b[0m\n",
- "\u001b[4;37m Integrating q-vectors \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate short vector hint \u001b[0m \u001b[3;34m 32768*c1279 ∈ Λ \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1191, δ=1.00440217, β=345.69 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate short vector hint \u001b[0m \u001b[3;34m 32768*c1229 ∈ Λ \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1141, δ=1.00442824, β=342.70 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;37m integrate short vector hint \u001b[0m \u001b[3;34m 32768*c1179 ∈ Λ \u001b[0m \u001b[3;32m \t Worthy hint ! \u001b[0m \u001b[1;33m dim=1091, δ=1.00444356, β=341.06 \u001b[0m\n",
- "\u001b[4;37m Attack Estimation \u001b[0m\n",
- "\u001b[3;34m ln(dvol)=4938.8191193 \t ln(Bvol)=5354.5619698 \t ln(Svol)=831.4857011 \tδ(β)=100000000000000000000.000000 \u001b[0m\n",
- "\u001b[1;33m dim=1067 \t δ=1.004444 \t β=340.85 \u001b[0m\n",
- "\u001b[0m \u001b[0m\n"
- ]
- }
- ],
- "source": [
- "# Integrate the leaked informations\n",
- "Id = identity_matrix(n + m)\n",
- "for i in range(n):\n",
- " v = vec(Id[i])\n",
- " \n",
- " # Log information for user\n",
- " if report_every is not None and ((i % report_every == 0) or (i == n - 1)) :\n",
- " verbose = 2 \n",
- " else:\n",
- " verbose = 0\n",
- " dbdd.verbosity = verbose\n",
- " if verbose == 2:\n",
- " logging(\"[...%d]\" % report_every, newline=False)\n",
- " \n",
- " # Integrate the hint as a perfect or an approximate hints\n",
- " if variance_aposteriori[measured[i]] is not None and variance_aposteriori[measured[i]] != 0:\n",
- " dbdd.integrate_approx_hint(v,\n",
- " center_aposteriori[measured[i]],\n",
- " variance_aposteriori[measured[i]],\n",
- " aposteriori=True, estimate=verbose)\n",
- " elif variance_aposteriori[measured[i]] is not None and variance_aposteriori[measured[i]] == 0 :\n",
- " dbdd.integrate_perfect_hint(v, center_aposteriori[measured[i]],\n",
- " estimate=verbose)\n",
- "\n",
- " \n",
- "# Integrate the known short verctors\n",
- "if report_every is not None:\n",
- " dbdd.integrate_q_vectors(q, indices=range(n, n + m), report_every=report_every)\n",
- "else:\n",
- " dbdd.integrate_q_vectors(q, indices=range(n, n + m))\n",
- "\n",
- "# Estimate the attack\n",
- "(beta, _) = dbdd.estimate_attack()"
- ]
- },
- {
- "cell_type": "markdown",
- "metadata": {},
- "source": [
- "#### Refined Side channel attack (with guesses)"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 28,
- "metadata": {},
- "outputs": [],
- "source": [
- "def ordered_indices(sorted_guesses, measured):\n",
- " \"\"\"\n",
- " Necessary for the bruteforce attack, this function\n",
- " sorts the indices of the coefficients\n",
- " of the secret with decreasing likelihood.\n",
- " :sorted_guess: the best guesses in order of likelihood\n",
- " :measured: the measurement for each coefficient\n",
- " :orderered_coefficients: the indices of the coefficients\n",
- " ordered according to Probability[secret[i] = measured[i]]\n",
- " \"\"\"\n",
- " orderered_coefficients = []\n",
- " for x in sorted_guesses:\n",
- " orderered_coefficients += [i for i, meas in enumerate(measured) if meas == x]\n",
- " return orderered_coefficients"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 29,
- "metadata": {},
- "outputs": [
- {
- "name": "stdout",
- "output_type": "stream",
- "text": [
- "\u001b[4;37m Hybrid attack estimation \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;33m dim=979 \t delta=1.004820 \t beta=302.25 \t guesses= 50 \u001b[0m \u001b[1;33m Proba success = 0.545250948149619 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;33m dim=893 \t delta=1.005249 \t beta=265.53 \t guesses= 100 \u001b[0m \u001b[1;33m Proba success = 0.0810400903824654 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;33m dim=796 \t delta=1.005837 \t beta=224.98 \t guesses= 150 \u001b[0m \u001b[1;33m Proba success = 0.000109821681025081 \u001b[0m\n",
- "\u001b[0m [...50] \u001b[0m \u001b[1;33m dim=698 \t delta=1.006591 \t beta=184.91 \t guesses= 200 \u001b[0m \u001b[1;33m Proba success = 1.26373998816764e-7 \u001b[0m\n"
- ]
- }
- ],
- "source": [
- "max_guesses = 200\n",
- "\n",
- "if report_every is not None:\n",
- " logging(\" Hybrid attack estimation \", style=\"HEADER\")\n",
- "\n",
- "sorted_guesses = sorted(proba_best_guess_correct.items(),\n",
- " key=lambda kv: - kv[1])\n",
- "sorted_guesses = [sorted_guesses[i][0] for i in range(len(sorted_guesses))\n",
- " if sorted_guesses[i][1] != 1.]\n",
- "proba_success = 1.\n",
- "dbdd.verbosity = 0\n",
- "guesses = 0\n",
- "j = 0\n",
- "for i in ordered_indices(sorted_guesses, measured):\n",
- " j += 1\n",
- " if (guesses <= max_guesses):\n",
- " v = vec(Id[i])\n",
- " if dbdd.integrate_perfect_hint(v, _):\n",
- " guesses += 1\n",
- " proba_success *= proba_best_guess_correct[measured[i]]\n",
- " if report_every is not None and (j % report_every == 0):\n",
- " logging(\"[...%d]\" % report_every, newline=False)\n",
- " dbdd.integrate_q_vectors(q, indices=range(n, n + m))\n",
- " logging(\"dim=%3d \\t delta=%.6f \\t beta=%3.2f \\t guesses=%4d\" %\n",
- " (dbdd.dim(), dbdd.delta, dbdd.beta, guesses),\n",
- " style=\"VALUE\", newline=False)\n",
- " logging(\"Proba success = %s\" % proba_success, style=\"VALUE\",\n",
- " newline=True)\n",
- "\n",
- "# Estimate the attack\n",
- "(beta, _) = dbdd.estimate_attack()"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": 30,
- "metadata": {},
- "outputs": [
- {
- "data": {
- "text/plain": [
- "1.00661158312729"
- ]
- },
- "execution_count": 30,
- "metadata": {},
- "output_type": "execute_result"
- }
- ],
- "source": [
- "_"
- ]
- },
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": []
- }
- ],
- "metadata": {
- "kernelspec": {
- "display_name": "SageMath 9.0",
- "language": "sage",
- "name": "sagemath"
- },
- "language_info": {
- "codemirror_mode": {
- "name": "ipython",
- "version": 3
- },
- "file_extension": ".py",
- "mimetype": "text/x-python",
- "name": "python",
- "nbconvert_exporter": "python",
- "pygments_lexer": "ipython3",
- "version": "3.7.3"
- }
- },
- "nbformat": 4,
- "nbformat_minor": 2
- }
|