Class notes

Computer Vision and Deep Learning

Pages: 37
Uploaded on: 16-03-2023
Written in: 2022/2023

Computer vision and deep learning topics, covering mostly visual features and representation methods, with brief explanations and examples, plus feature extraction and image processing concepts.


CVDL
Visual Features &
Representation
Unit - 2


Notes


Edge:

An Edge in an image is a sharp variation of the intensity function. In grayscale images this
applies to the intensity or brightness of pixels. In color images it can also refer to sharp
variations of color. An edge is distinguished from noise by possessing long range structure.
Properties of edges include gradient and orientation.

Edge Detection:

Edge detection is an image-processing technique used to identify points in a digital image with discontinuities, that is, sharp changes in image brightness. The points where the image brightness varies sharply are called the edges (or boundaries) of the image.

Blob :

A blob is, loosely, any large object or bright region on a dark background. In images, we can generalize it as a group of connected pixels that forms a region distinguishable from its background. Using image processing, we can detect such blobs in an image.
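The blob idea above can be sketched with a simple connected-components pass: threshold the image into bright/dark pixels, then flood-fill groups of bright pixels into blobs. The image, the threshold value, and the function names below are illustrative assumptions, not part of the notes.

```python
from collections import deque

def find_blobs(img, thresh):
    """Return a list of blobs; each blob is a set of (row, col) pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if img[y][x] > thresh and not seen[y][x]:
                # Flood fill (BFS) over 4-connected bright neighbours.
                blob, queue = set(), deque([(y, x)])
                seen[y][x] = True
                while queue:
                    cy, cx = queue.popleft()
                    blob.add((cy, cx))
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and img[ny][nx] > thresh and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                blobs.append(blob)
    return blobs

# Hypothetical test image: two bright groups on a dark background.
img = [[0, 9, 9, 0, 0],
       [0, 9, 0, 0, 8],
       [0, 0, 0, 8, 8]]
blobs = find_blobs(img, 5)   # two blobs of three pixels each
```

Here 4-connectivity is one common choice; 8-connectivity (including diagonals) would merge diagonally touching regions instead.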

Corner :

A corner is a point whose local neighborhood contains two dominant, distinct edge directions. In other words, a corner can be interpreted as the junction of two edges, where an edge is a sudden change in image brightness. Corners are important features in an image; they are generally termed interest points and are invariant to translation, rotation, and illumination.

Corner Detection:

Corner detection is an approach used within computer vision systems to extract certain kinds of
features and infer the contents of an image. Corner detection is frequently used in motion
detection, image registration, video tracking, image mosaicing, panorama stitching, 3D
reconstruction and object recognition. Corner detection overlaps with the topic of interest point
detection.
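One well-known instance of this idea is the Harris detector: measure the distribution of gradients in a small window and respond strongly only where two dominant directions are present (det large relative to trace). The 3x3 window, the constant k, and the test image below are illustrative assumptions, not the method these notes use.

```python
def harris_response(img, k=0.05):
    """Harris response R = det(M) - k * trace(M)^2 per interior pixel."""
    h, w = len(img), len(img[0])
    # Central-difference gradients Ix, Iy.
    ix = [[0.0] * w for _ in range(h)]
    iy = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2
            iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2
    r = [[0.0] * w for _ in range(h)]
    for y in range(2, h - 2):
        for x in range(2, w - 2):
            sxx = syy = sxy = 0.0
            for j in range(-1, 2):        # 3x3 structure-tensor window
                for i in range(-1, 2):
                    gx, gy = ix[y + j][x + i], iy[y + j][x + i]
                    sxx += gx * gx
                    syy += gy * gy
                    sxy += gx * gy
            r[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return r

# Hypothetical test image: a bright square whose top-left corner sits at (3, 3).
img = [[0] * 7 for _ in range(7)]
for y in range(3, 7):
    for x in range(3, 7):
        img[y][x] = 10
r = harris_response(img)   # response peaks at the corner (3, 3)
```

Along a straight edge only one gradient direction dominates, so det(M) stays near zero and R is small or negative; only at the corner do both sums grow, which is why the response peaks there.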

Scale-space:

Real-world objects are meaningful only at a certain scale. You might see a sugar cube perfectly well on a table, but at the scale of the entire Milky Way it simply does not exist. This multi-scale nature of objects is quite common in nature, and a scale space attempts to replicate the concept for digital images.
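A rough sketch of the idea: represent the same image at progressively coarser scales, so that fine detail disappears at higher levels. Here each level simply averages 2x2 blocks, a crude stand-in for the Gaussian smoothing and subsampling a real scale space would use; the striped test image is an illustrative assumption.

```python
def downsample(img):
    """Average non-overlapping 2x2 blocks to halve each dimension."""
    h, w = len(img) // 2, len(img[0]) // 2
    return [[(img[2 * y][2 * x] + img[2 * y][2 * x + 1] +
              img[2 * y + 1][2 * x] + img[2 * y + 1][2 * x + 1]) / 4
             for x in range(w)] for y in range(h)]

def pyramid(img, levels):
    """Build a list of images, each half the size of the previous one."""
    out = [img]
    for _ in range(levels - 1):
        out.append(downsample(out[-1]))
    return out

# Fine vertical stripes: visible at level 0, gone (uniform grey) one level up,
# just as the sugar cube vanishes at galactic scale.
img = [[(x % 2) * 8 for x in range(8)] for _ in range(8)]
levels = pyramid(img, 3)
```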

Concept of Edge Detection

Edge detection locates the position and presence of edges by detecting changes in the intensity of an image, and different operators are used in image processing for this purpose. Such operators detect variations in grey level, but they also respond quickly to noise. Edge detection is a very important task in image processing: it is a main tool in pattern recognition, image segmentation, and scene analysis. It is a type of filter applied to extract the edge points in an image. Sudden changes in brightness occur where an object's contour crosses the image.

In image processing, edges are interpreted as a single class of singularity. In a function, a singularity is characterized as a discontinuity at which the gradient approaches infinity.

Since image data is in discrete form, the edges of an image are defined as the local maxima of the gradient.

Edges mostly exist between object and object, between primitive and primitive, and between object and background, and the reflected intensities across them are discontinuous. Edge detection methods study how the grey level changes from pixel to pixel in an image.

Edge detection is mostly used for measuring, detecting, and locating changes in image grey level. Edges are a basic feature of an image. The clearest parts of an object are its edges and lines, and with their help the object's structure can be recovered. That is why extracting edges is a very important technique in graphics processing and feature extraction.

The basic idea behind edge detection is as follows:

1. Use an edge-enhancement operator to highlight local edges.

2. Define the edge strength and set the edge points.
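These two steps can be sketched in a few lines: enhance edges with a simple gradient operator, then threshold the edge strength to pick edge points. The central-difference operator, the threshold value, and the test image are illustrative assumptions; real detectors use the operators described next.

```python
def edge_points(img, thresh):
    """Return the set of (row, col) points whose edge strength exceeds thresh."""
    h, w = len(img), len(img[0])
    points = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]   # horizontal central difference
            gy = img[y + 1][x] - img[y - 1][x]   # vertical central difference
            strength = abs(gx) + abs(gy)         # approximate gradient magnitude
            if strength > thresh:
                points.add((y, x))
    return points

# A 5x5 image with a vertical step edge between columns 1 and 2.
img = [[0, 0, 10, 10, 10]] * 5
print(sorted(edge_points(img, 5)))
# → [(1, 1), (1, 2), (2, 1), (2, 2), (3, 1), (3, 2)]
```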

There are five edge detection operators; they are as follows:

1. Sobel Edge Detection Operator

The Sobel edge detection operator extracts all the edges of an image, regardless of their direction. The main advantage of the Sobel operator is that it provides both a differencing and a smoothing effect.




The Sobel operator is implemented as the sum of two directional edge responses, and the resulting image is a unidirectional outline of the original image.

The Sobel edge detection operator consists of two 3×3 convolution kernels: Gx is a simple kernel and Gy is Gx rotated by 90°.

    Gx = [ -1  0  +1 ]        Gy = [ -1  -2  -1 ]
         [ -2  0  +2 ]             [  0   0   0 ]
         [ -1  0  +1 ]             [ +1  +2  +1 ]

These kernels are applied to the input image separately, so that a separate gradient measurement is produced in each orientation, i.e. Gx and Gy.

Following is the gradient magnitude:

    |G| = √(Gx² + Gy²)

Since it is much faster to compute, an approximate magnitude is often used instead:

    |G| ≈ |Gx| + |Gy|

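The Sobel step above can be sketched directly: correlate the image with the two 3x3 kernels and combine the responses with the fast |Gx| + |Gy| approximation. The kernel signs follow one common convention, and the test image is an illustrative assumption.

```python
GX = [[-1, 0, 1],
      [-2, 0, 2],
      [-1, 0, 1]]
GY = [[-1, -2, -1],
      [ 0,  0,  0],
      [ 1,  2,  1]]

def apply_kernel3(img, k, y, x):
    """Apply a 3x3 kernel k to img centred at (y, x)."""
    return sum(k[j][i] * img[y + j - 1][x + i - 1]
               for j in range(3) for i in range(3))

def sobel_magnitude(img):
    """Approximate Sobel gradient magnitude |Gx| + |Gy| per interior pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = apply_kernel3(img, GX, y, x)
            gy = apply_kernel3(img, GY, y, x)
            out[y][x] = abs(gx) + abs(gy)
    return out

# Vertical step edge: the response peaks on the two columns flanking the step.
img = [[0, 0, 10, 10, 10]] * 5
mag = sobel_magnitude(img)
```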
2. Robert's cross operator

Robert's cross operator performs a 2-D spatial gradient measurement on an image and is simple and quick to compute. In the output of Robert's cross operator, the pixel value at each point represents the estimated absolute magnitude of the spatial gradient of the input image at that point.

Robert's cross operator consists of two 2×2 convolution kernels: Gx is a simple kernel and Gy is Gx rotated by 90°.

    Gx = [ +1   0 ]        Gy = [  0  +1 ]
         [  0  -1 ]             [ -1   0 ]

Following is the gradient magnitude:

    |G| = √(Gx² + Gy²)

Since it is much faster to compute, an approximate magnitude is often used instead:

    |G| ≈ |Gx| + |Gy|

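Because the kernels are only 2x2, the Roberts cross reduces to two diagonal differences per pixel. A minimal sketch, with an assumed test image:

```python
def roberts_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| with the 2x2 Roberts kernels."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h - 1):
        for x in range(w - 1):
            gx = img[y][x] - img[y + 1][x + 1]   # Gx = [[+1, 0], [0, -1]]
            gy = img[y][x + 1] - img[y + 1][x]   # Gy = [[0, +1], [-1, 0]]
            out[y][x] = abs(gx) + abs(gy)
    return out

# Vertical step edge between columns 1 and 2.
img = [[0, 0, 10, 10]] * 4
mag = roberts_magnitude(img)
```

Both diagonal differences straddle the step at column 1, so the response concentrates there; in flat regions both differences vanish.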

3. Laplacian of Gaussian

The Laplacian of Gaussian is a 2-D isotropic measure of the second spatial derivative of an image. The Laplacian highlights regions of rapid intensity change and is therefore also used for edge detection. It is applied to an image that has first been smoothed with a Gaussian smoothing filter, in order to reduce its sensitivity to noise. The operator takes a single grey-level image as input and produces a single grey-level image as output.

Following is the Laplacian L(x, y) of an image with pixel intensity values I(x, y):

    L(x, y) = ∂²I/∂x² + ∂²I/∂y²

Since the input image is represented as a set of discrete pixels, a discrete convolution kernel that approximates the second derivatives in this definition is used.

Three commonly used kernels are as follows:
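The LoG pipeline described above can be sketched as Gaussian smoothing followed by a discrete Laplacian. The 3x3 Gaussian approximation, the 4-neighbour Laplacian kernel, and the test image are common choices assumed here for illustration; they are not necessarily the kernels the notes list.

```python
GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]        # weights sum to 16
LAPLACE = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]     # 4-neighbour Laplacian

def apply3(img, k, scale=1):
    """Apply a 3x3 kernel k (divided by scale) to every interior pixel."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = sum(k[j][i] * img[y + j - 1][x + i - 1]
                    for j in range(3) for i in range(3))
            out[y][x] = s / scale
    return out

def log_filter(img):
    """Smooth with the Gaussian, then take the discrete Laplacian."""
    return apply3(apply3(img, GAUSS, 16), LAPLACE)

# Vertical step edge: cols 0-2 dark, cols 3-6 bright.
img = [[0] * 7 for _ in range(7)]
for row in img:
    for x in range(3, 7):
        row[x] = 16
log_img = log_filter(img)
```

The response swings positive on the dark side of the step and negative on the bright side; the zero crossing between them marks the edge location.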

Document information
Professor(s): Rahul
Uploaded by: raghavsurya74 (GHRIET)