Error while Plotting Decision Boundary using Matplotlib



























I recently wrote a logistic regression model using scikit-learn. However, I'm having a really hard time plotting the decision boundary line. I'm explicitly multiplying the coefficients and the intercepts and plotting the result, which produces the wrong figure.



Could someone point me in the right direction on how to plot the decision boundary?



Is there an easier way to plot the line without having to manually multiply the coefficients and the intercepts?



Thanks a million!



import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

#Import Dataset
dataset = pd.read_csv("Students Exam Dataset.txt", names=["Exam 1", "Exam 2", "Admitted"])
print(dataset.head())

#Visualizing Dataset
positive = dataset[dataset["Admitted"] == 1]
negative = dataset[dataset["Admitted"] == 0]

plt.scatter(positive["Exam 1"], positive["Exam 2"], color="blue", marker="o", label="Admitted")
plt.scatter(negative["Exam 1"], negative["Exam 2"], color="red", marker="x", label="Not Admitted")
plt.title("Student Admission Plot")
plt.xlabel("Exam 1")
plt.ylabel("Exam 2")
plt.legend()
plt.plot()
plt.show()

#Preprocessing Data
col = len(dataset.columns)
x = dataset.iloc[:,0:col].values
y = dataset.iloc[:,col-1:col].values
print(f"X Shape: {x.shape} Y Shape: {y.shape}")

x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=1306)

#Initialize Model
reg = LogisticRegression()
reg.fit(x_train, y_train)

#Output
predictions = reg.predict(x_test)
accuracy = accuracy_score(y_test, predictions) * 100
coeff = reg.coef_
intercept = reg.intercept_
print(f"Accuracy Score : {accuracy} %")
print(f"Coefficients = {coeff}")
print(f"Intercept Coefficient = {intercept}")

#Visualizing Output
xx = np.linspace(30,100,100)
decision_boundary = (coeff[0,0] * xx + intercept.item()) / coeff[0,1]
plt.scatter(positive["Exam 1"], positive["Exam 2"], color="blue", marker="o", label="Admitted")
plt.scatter(negative["Exam 1"], negative["Exam 2"], color="red", marker="x", label="Not Admitted")
plt.plot(xx, decision_boundary, color="green", label="Decision Boundary")
plt.title("Student Admission Plot")
plt.xlabel("Exam 1")
plt.ylabel("Exam 2")
plt.legend()
plt.show()


Dataset: Student Dataset.txt










Tags: python matplotlib scikit-learn






edited Nov 19 '18 at 17:33 by desertnaut










asked Nov 19 '18 at 16:55 by Antony John
























1 Answer















          Is there an easier way to plot the line without having to manually multiply the coefficients and the intercepts?




          Yes, if you don't need to build this from scratch, there is an excellent implementation for plotting the decision boundaries of scikit-learn classifiers in the mlxtend package. The linked documentation is extensive, and the package is easy to install with pip install mlxtend.



          First, a couple of points about the Preprocessing block of the code you posted:

          1. x should not include the class labels.

          2. y should be a 1d array.



          #Preprocessing Data
          col = len(dataset.columns)
          x = dataset.iloc[:, 0:col-1].values  # assumes the labels are always in the final column
          y = dataset.iloc[:, col-1:col].values
          y = y.reshape(-1)  # convert to 1d
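Point 2 matters because scikit-learn estimators expect a one-dimensional label array; passing a column vector still fits, but raises a DataConversionWarning. A minimal illustration (the toy labels below are made up):

```python
import numpy as np

# A column vector of labels, shape (3, 1): this is what the iloc slice above
# returns and what makes scikit-learn warn on fit().
y_col = np.array([[0], [1], [1]])

# Flattening to shape (3,) gives the 1d array estimators expect.
y_flat = y_col.reshape(-1)

print(y_col.shape, y_flat.shape)  # (3, 1) (3,)
```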


          Now the plotting is as easy as:



          from mlxtend.plotting import plot_decision_regions
          plot_decision_regions(x, y,
                                X_highlight=x_test,
                                clf=reg,
                                legend=2)


          This particular plot highlights x_test data points by encircling them.
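For completeness, the manual approach from the question also works once the algebra is right: with two features, the boundary of a logistic model is the line where w1*x1 + w2*x2 + b = 0, i.e. x2 = -(w1*x1 + b) / w2 (note the leading minus sign, which the formula in the question is missing). A self-contained sketch on synthetic data (the score range and the 130 admission threshold are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the exam-score data.
rng = np.random.default_rng(0)
X = rng.uniform(30, 100, size=(200, 2))
y = (X.sum(axis=1) > 130).astype(int)  # "admitted" if combined score > 130

clf = LogisticRegression(max_iter=1000).fit(X, y)
w1, w2 = clf.coef_[0]
b = clf.intercept_[0]

# Boundary: w1*x1 + w2*x2 + b = 0  =>  x2 = -(w1*x1 + b) / w2
xx = np.linspace(30, 100, 100)
boundary = -(w1 * xx + b) / w2
```

plt.plot(xx, boundary) over the scatter then draws the line; every point on it sits exactly where the model's decision function is zero, i.e. at predicted probability 0.5.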









          answered Nov 20 '18 at 2:43 by Kevin
























          • Thanks a million! Just for personal info, can the same be achieved using matplotlib or scikit-learn with the same order of simplicity?

            – Antony John
            Nov 20 '18 at 11:40











          • Examples from the scikit-learn docs: DecisionTree and SVM. There are also nice examples in a Kaggle kernel. It's certainly more lines of code than the mlxtend method, but it's up to you.

            – Kevin
            Nov 20 '18 at 12:11











          • I've recently started facing an issue where I get an error saying "ValueError: Filler values must be provided when X has more than 2 training features." Any idea what is causing this?

            – Antony John
            Dec 2 '18 at 14:23











            See Example 7 from the mlxtend link. When there are more than two features, it requires "filler" values for the features not being plotted. The links in my previous comment are more straightforward examples. You can always slice the columns from x that you want. For example the first and second feature: x[:, 0] and x[:, 1].

            – Kevin
            Dec 3 '18 at 12:43
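The column slicing the last comment mentions is plain NumPy indexing; a tiny illustration (the array values are arbitrary):

```python
import numpy as np

X = np.arange(12).reshape(4, 3)  # 4 samples, 3 features
X_two = X[:, [0, 1]]             # keep only the first and second feature

print(X_two.shape)  # (4, 2)
```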










