Max Cut: Form of Graph Laplacian?

In my convex optimization notes, the max cut problem is defined as
$$\max_{x\in\Bbb{R}^n} \hspace{.1in} x^TL_Gx \hspace{.5in}
\text{subject to } x_i\in\{-1,1\},\ i=1,\cdots,n$$



where $L_G$ is a matrix called the Laplacian of the graph $G$.



In reality, we are maximizing the expression
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2
\propto
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j),
\hspace{.5in} x\in\{-1,1\}^n.$$

Can someone explain/derive how the two expressions are equal? That is, what is the form of $L_G$ such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(x_i-x_j)^2=x^TL_Gx,$$
or such that
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=x^TL_Gx?$$
Clearly $x^TAx=\sum_{i,j}A_{ij}x_ix_j$, but that's not the form we have above.



From the second form, I see that we almost get there:
$$\dfrac{1}{2}\sum_{i,j\in V}w_{ij}(1-x_ix_j)=
\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}\sum_{i,j\in V}w_{ij}x_ix_j
=\dfrac{1}{2}\sum_{i,j\in V}w_{ij}
-\dfrac{1}{2}x^TWx
$$

but the first term confuses me.
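
As a quick numerical sanity check (my own sketch, not from the notes: the weight matrix and cut vector below are made up), one can verify that the standard choice $L_G = D - W$, where $D$ is the diagonal matrix of weighted degrees, makes the first identity hold:

import numpy as np

# Hypothetical symmetric weight matrix of a small graph (zero diagonal).
W = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 0.],
              [0., 1., 0., 0.]])
D = np.diag(W.sum(axis=1))        # diagonal matrix of weighted degrees
L = D - W                         # candidate Laplacian L_G = D - W

x = np.array([1., -1., -1., 1.])  # an arbitrary +/-1 cut vector

lhs = 0.5 * sum(W[i, j] * (x[i] - x[j]) ** 2
                for i in range(4) for j in range(4))
print(lhs, x @ L @ x)             # both print 16.0: the two forms agree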

convex-optimization

asked Nov 19 at 23:24 – Dan

  • Have a look at csustan.csustan.edu/~tom/Clustering/GraphLaplacian-tutorial.pdf
    – Jean Marie
    Nov 19 at 23:39

1 Answer

I seem to have figured out a derivation to the point where I am satisfied. If someone posts a better solution, I will mark it as "best answer." Here is my solution:



The elements of the (simple) graph Laplacian are given by (from Wikipedia):
$$
L_{ij}:=
\begin{cases}
\deg(v_i), & \text{if } i=j\\
-1, & \text{if } i\sim j\\
0, & \text{otherwise}
\end{cases}
$$

So an example graph Laplacian might look like:
$$
L_{\text{example}}=\begin{bmatrix}
2&-1&-1&0 \\
-1&3&-1&-1\\
-1&-1&2&0\\
0&-1&0&1
\end{bmatrix}
$$

Notice how each row sums to zero: the diagonal element is the number of adjacent vertices, and the off-diagonal elements subtract $1$ for each of those vertices. Since the matrix is symmetric, each column sums to zero for the same reason.



Now let $x\in\{-1,1\}^n$, where $x_i$ indicates which side of the cut vertex $i$ is on. One example could be:
$$
x_{\text{example}}=\begin{bmatrix}
1\\
-1\\
-1\\
1
\end{bmatrix}
$$

so computing $L_{\text{example}}\,x_{\text{example}}$ returns a column vector. The $i$th element of this vector is obtained by taking the degree of vertex $i$, adding $1$ for each neighbor on the other side of the cut, subtracting $1$ for each neighbor on the same side of the cut, and then multiplying the result by $-1$ if vertex $i$ itself is on the $-1$ side of the cut. This last sign doesn't matter, because multiplying by $x_{\text{example}}^T$ on the left in $x_{\text{example}}^TL_{\text{example}}\,x_{\text{example}}$ cancels exactly these signs. For the example above,
$$
x_{\text{example}}^TL_{\text{example}}\,x_{\text{example}}=
\begin{bmatrix}
1 & -1 & -1 & 1
\end{bmatrix}
\begin{bmatrix}
4\\
-4\\
-2\\
2
\end{bmatrix}
=12
$$
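
For completeness, here is a small numpy sketch (my own illustration of the numbers above) that builds $L_{\text{example}}$ from the adjacency matrix of the example graph and reproduces this computation:

import numpy as np

# Adjacency matrix of the example graph (edges 1-2, 1-3, 2-3, 2-4).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
L = np.diag(A.sum(axis=1)) - A     # L = D - A gives L_example above
x = np.array([1, -1, -1, 1])       # the example cut vector

print(L.sum(axis=0))               # [0 0 0 0]: every row/column sums to zero
print(L @ x)                       # [ 4 -4 -2  2]
print(x @ L @ x)                   # 12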



Thus, element $i$ of $Lx$ is (up to an overall sign of $x_i$):
$$
(Lx)_i=
\deg(v_i)+\Bigg(\sum_{\substack{j\sim i,\\ j\text{ on other side}}}1\Bigg)
-\Bigg(\sum_{\substack{j\sim i,\\ j\text{ on same side}}}1\Bigg)
$$

Multiplying by $x^T$ removes those signs, so $x^TLx$ is the sum of these quantities:
$$
\begin{aligned}
x^TLx&=\sum_{i\in V}\deg(v_i)+2(\text{\# edges crossing cut})-2(\text{\# edges not crossing cut})\\
&=2(\text{\# edges}+\text{\# edges crossing cut}-\text{\# edges not crossing cut})\\
&=4(\text{\# edges crossing cut})
\end{aligned}
$$
because each edge is counted once from each endpoint, so $\sum_{i\in V}\deg(v_i)=2(\text{\# edges})$, and
$$
\text{\# edges}=\text{\# edges crossing cut}+\text{\# edges not crossing cut}.
$$
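
As a check on this counting argument (same example graph as above, my own sketch), $x^TLx$ can be compared with four times the number of crossing edges over every $\pm 1$ assignment:

import numpy as np
from itertools import product

# Same example graph as above.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 0],
              [0, 1, 0, 0]])
L = np.diag(A.sum(axis=1)) - A
edges = [(i, j) for i in range(4) for j in range(i + 1, 4) if A[i, j]]

for bits in product([-1, 1], repeat=4):
    x = np.array(bits)
    crossing = sum(1 for i, j in edges if x[i] != x[j])
    assert x @ L @ x == 4 * crossing   # x^T L x = 4 * (# edges crossing the cut)
print("x^T L x = 4 * (# crossing edges) for all 16 cut vectors")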

Thus, this representation with $L$ is useful in convex optimization/max cut: maximizing $x^TLx$ over $x\in\{-1,1\}^n$ maximizes a quantity proportional to the number of edges crossing the cut.



Clearly this is the result for an unweighted graph Laplacian. The weighted generalization is straightforward: replace $\deg(v_i)$ by $\sum_j w_{ij}$ and the off-diagonal $-1$ by $-w_{ij}$ (i.e. $L=D-W$), and the same argument gives $x^TLx=4\cdot(\text{total weight of edges crossing the cut})$.
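
Here is a minimal sketch of that weighted generalization (the example weights are my own and purely illustrative):

import numpy as np
from itertools import product

# Hypothetical symmetric weight matrix (zero diagonal).
W = np.array([[0., 2., 1., 0.],
              [2., 0., 3., 1.],
              [1., 3., 0., 0.],
              [0., 1., 0., 0.]])
L = np.diag(W.sum(axis=1)) - W     # weighted Laplacian L = D - W

for bits in product([-1., 1.], repeat=4):
    x = np.array(bits)
    cut_weight = sum(W[i, j] for i in range(4) for j in range(i + 1, 4)
                     if x[i] != x[j])
    assert np.isclose(x @ L @ x, 4 * cut_weight)
print("x^T L x = 4 * (weight crossing the cut) for all 16 cuts")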

answered Nov 20 at 18:46 – Dan