ISYE 7406 Homework 1
1. Introduction
This homework examines fundamental classification methods applied to a real-world, high-dimensional
dataset and evaluates their performance using training, testing, and Monte Carlo cross-validation. The
objective is to distinguish between handwritten digits 2 and 7 from the well-known ZIP code dataset, a
classic benchmark in pattern recognition and image classification. Each observation consists of a 16×16
grayscale image, highlighting common challenges in supervised learning such as high dimensionality, noise,
and model complexity.

Two families of classifiers are considered: linear regression used as a classifier and the k-nearest neighbors
(KNN) method with multiple choices of k. By comparing these approaches under different error estimation
schemes, the analysis illustrates how model flexibility and tuning parameters influence predictive
accuracy, overfitting, and robustness.

2. Exploratory data analysis
The training dataset, denoted as ziptrain27, contains 1,376 observations and 257 variables. The first
column represents the digit label (2 or 7), while the remaining 256 columns correspond to pixel intensities
of a 16×16 grayscale image, with values bounded in the interval [−1,1].

Among the training observations, 731 correspond to digit 2 and 645 to digit 7, indicating a reasonably
balanced class distribution. Because the response is balanced and misclassification costs are assumed to
be symmetric, overall classification error is an appropriate performance metric, and no class weighting or
resampling is required.

Each observation can be reshaped into a 16×16 matrix and visualized as a grayscale image. Visual
inspection of several examples (for instance, row 5, which clearly represents digit 7) confirms that digits
2 and 7 exhibit distinct structural patterns. This observation supports the feasibility of accurate
classification despite the high dimensionality of the feature space. Since all pixel features are measured
on a comparable numeric scale within [−1,1], no additional standardization was performed. Shown below
are visual representations of the digit 7 rendered at two different grayscale resolutions (left: a coarse
two-level scale, 0–1; right: a finer 33-level scale, 0/32 through 32/32). These renderings demonstrate how
the grayscale resolution affects the sharpness and clarity of the digit.
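To make these exploratory steps concrete, the following is a minimal sketch in Python (numpy/matplotlib). The original analysis was presumably carried out in R; the file name zip.train, its space-delimited layout, and the resampled two-level and 33-level gray palettes are assumptions for illustration, not details taken from the assignment code.

import numpy as np
import matplotlib.pyplot as plt

# Load the full ZIP training data and keep only digits 2 and 7.
zip_train = np.loadtxt("zip.train")                     # hypothetical path
ziptrain27 = zip_train[np.isin(zip_train[:, 0], [2, 7])]

# Class distribution: expect roughly 731 twos and 645 sevens.
labels, counts = np.unique(ziptrain27[:, 0], return_counts=True)
print(dict(zip(labels, counts)))

# Reshape one observation (row 5 in 1-indexed terms) into a 16x16 image
# and plot it at two grayscale resolutions: 2 levels vs. 33 levels.
img = ziptrain27[4, 1:].reshape(16, 16)
fig, axes = plt.subplots(1, 2)
axes[0].imshow(img, cmap=plt.cm.gray.resampled(2))      # coarse, 2 levels
axes[1].imshow(img, cmap=plt.cm.gray.resampled(33))     # finer, 33 levels
plt.show()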





3. Methods
3.1 Linear regression model classifier
A linear regression model was fitted using the training dataset ziptrain27, with the digit label Y∈{2,7} as
the response and the 256 pixel intensities as predictors. Predicted values of Y were converted to class
labels by applying a threshold of 4.5, the midpoint between 2 and 7: observations with Ŷ < 4.5 were
classified as digit 2, and those with Ŷ ≥ 4.5 were classified as digit 7.

This approach treats linear regression as a simple parametric classifier, inducing a linear decision boundary
in the 256-dimensional feature space. Although it does not capture potential nonlinear structure in the
data, the model provides a useful baseline with relatively low variance compared to more flexible,
nonparametric methods.
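As a concrete illustration, below is a minimal sketch of this procedure in Python, with scikit-learn standing in for whatever regression routine the original code used; ziptrain27 is assumed to be the array constructed in the earlier sketch, and the hypothetical file zip.test is subset in the same way.

import numpy as np
from sklearn.linear_model import LinearRegression

# Build ziptest27 the same way ziptrain27 was built in the EDA sketch.
zip_test = np.loadtxt("zip.test")                       # hypothetical path
ziptest27 = zip_test[np.isin(zip_test[:, 0], [2, 7])]

X_train, y_train = ziptrain27[:, 1:], ziptrain27[:, 0]
X_test, y_test = ziptest27[:, 1:], ziptest27[:, 0]

lm = LinearRegression().fit(X_train, y_train)

def classify(model, X):
    # Threshold fitted values at 4.5, the midpoint between labels 2 and 7.
    return np.where(model.predict(X) < 4.5, 2, 7)

train_err = np.mean(classify(lm, X_train) != y_train)
test_err = np.mean(classify(lm, X_test) != y_test)
print(f"LM training error: {train_err:.4f}, testing error: {test_err:.4f}")

Note that encoding the labels as 2 and 7 and splitting at their midpoint is equivalent, up to an affine rescaling of the fitted values, to a 0/1 label encoding thresholded at 0.5.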

3.2 K-nearest neighbors (KNN)
The k-nearest neighbors (KNN) classifier was implemented using Euclidean distance in the original pixel
feature space. Eight values of the tuning parameter were considered: k∈{1,3,5,7,9,11,13,15}.

For each test observation, the classifier identifies the k nearest training samples and assigns the class label
by majority vote among their corresponding labels. Smaller values of k produce highly flexible decision
rules with low bias but high variance, whereas larger values of k yield smoother classifiers characterized
by higher bias and lower variance.
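Continuing the sketch above, the grid of k values can be evaluated with scikit-learn's KNeighborsClassifier, whose default Minkowski metric with p = 2 is exactly the Euclidean distance described in the text.

from sklearn.neighbors import KNeighborsClassifier

for k in [1, 3, 5, 7, 9, 11, 13, 15]:
    knn = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    err = np.mean(knn.predict(X_test) != y_test)
    print(f"k = {k:2d}: testing error = {err:.4f}")

With two classes and odd k, majority votes can never tie, which is one practical reason the grid uses odd values only.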

3.3 Error estimation procedures
Model performance was evaluated using three complementary approaches.
• Training error
For each method, the empirical misclassification rate was computed on the original training
dataset ziptrain27. While this measure reflects goodness-of-fit, it typically underestimates the
true predictive error.
• Testing error on an independent test set
An independent testing dataset, ziptest27, was constructed by subsetting the original ZIP test data
to digits 2 and 7. Misclassification rates were computed on ziptest27 using models trained
exclusively on ziptrain27.
• Monte Carlo cross-validation (MC-CV)
The full dataset of digits 2 and 7 was formed by combining ziptrain27 and ziptest27 into zip27full,
yielding a total sample size of 1,721. For each of B = 100 runs, 1,376 observations were randomly
selected as a temporary training set, with the remaining 345 used as a temporary test set. All nine
models (linear regression and eight KNN variants) were trained on the temporary training sets
and evaluated on the corresponding test sets. Average cross-validation testing errors and their
sample variances were then computed across the B runs for each method.

An alternative Monte Carlo cross-validation (MC-CV) scheme, also implemented in the provided code,
resamples digits 2 and 7 separately to preserve class balance in each split and records both average
training and testing errors across runs. This approach leads to qualitatively similar conclusions; therefore,
detailed numerical results are omitted for brevity.
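A minimal sketch of the primary MC-CV scheme, continuing from the sketches above (the seed value is an arbitrary choice, not taken from the original code):

rng = np.random.default_rng(7406)            # arbitrary seed for reproducibility
zip27full = np.vstack([ziptrain27, ziptest27])
n, n_train, B = len(zip27full), 1376, 100
ks = [1, 3, 5, 7, 9, 11, 13, 15]
errors = np.zeros((B, 1 + len(ks)))          # column 0: linear regression

for b in range(B):
    idx = rng.permutation(n)                 # random split: 1,376 train / 345 test
    tr, te = zip27full[idx[:n_train]], zip27full[idx[n_train:]]
    Xtr, ytr = tr[:, 1:], tr[:, 0]
    Xte, yte = te[:, 1:], te[:, 0]
    lm = LinearRegression().fit(Xtr, ytr)
    errors[b, 0] = np.mean(np.where(lm.predict(Xte) < 4.5, 2, 7) != yte)
    for j, k in enumerate(ks, start=1):
        knn = KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr)
        errors[b, j] = np.mean(knn.predict(Xte) != yte)

print("mean CV testing errors:", errors.mean(axis=0))
print("sample variances:      ", errors.var(axis=0, ddof=1))

The class-balanced variant mentioned above would instead permute the digit-2 and digit-7 rows separately and draw proportional shares of each into every temporary training set.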

4. Results
4.1 Training and testing errors
Table 1 summarizes the training errors (on ziptrain27) and the testing errors (on ziptest27) for all nine
models.

