Program:
Node* findMidNode(Node* start)
{
    Node* midNode = start;
    while (start != NULL && start->next != NULL)
    {
        midNode = midNode->next;        // slow pointer: one step per iteration
        start = start->next->next;      // fast pointer: two steps per iteration
    }
    return midNode;
}
Algorithm:
midNode (the slow pointer) moves one node per iteration, while
start (the fast pointer) moves two nodes per iteration.
So, when start reaches the end of the list, midNode is pointing to the middle element of the list.
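For reference, here is a minimal Node definition and the expected result; neither appears in the original post, so these are assumptions.
struct Node
{
    int data;
    struct Node* next;
};
typedef struct Node Node;   /* so the plain name Node used above also works in C */

/* For the list 1 -> 2 -> 3 -> 4 -> 5, findMidNode returns the node holding 3.
   For the even-length list 1 -> 2 -> 3 -> 4, it also returns the node holding 3,
   i.e., the second of the two middle nodes. */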
Tuesday, March 30, 2010
reverse the linked list using a recursive method
Algorithm:
Walk to the end of the list recursively; the last node becomes the new head (start). Then, as the recursion unwinds, make each node's successor point back to it (node->next->next = node) and clear its own next pointer.
//structure
struct Node
{
    int data;
    struct Node* next;
};
//global variable: start points to the first element in the list
struct Node* start = NULL;
//assume that we have added some elements into the list, and start
//is pointing to the first element in the list
reverseList(start);
void reverseList(struct Node* node)
{
    if (node == NULL)
        return;
    if (node->next == NULL)
    {
        start = node;                 // the last node becomes the new head
        return;
    }
    reverseList(node->next);          // reverse the rest of the list first
    node->next->next = node;          // point the next node back to this one
    node->next = NULL;                // this node is the tail (so far)
}
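A quick trace of the recursion (my illustration, not from the original post), for the list a -> b -> c with start = a:
/* reverseList(a) calls reverseList(b), which calls reverseList(c)
   c->next == NULL, so start = c                   (c is the new head)
   unwinding in b: c->next = b, b->next = NULL     list is now c -> b
   unwinding in a: b->next = a, a->next = NULL     list is now c -> b -> a */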
how to print the bits in a reverse order (without using any extra variable)
Suppose 123 is represented as 01111011. We have to print its bits in reverse order, i.e., 11011110. The loop below prints the bits from least-significant to most-significant and stops once the remaining value is zero, so the high-order zero bits are not printed (123 prints as 1101111).
Program:
#include <stdio.h>

int main()
{
    int i = 123;
    while (i != 0)
    {
        if (i & 1)          // test the least-significant bit
            printf("1");
        else
            printf("0");
        i >>= 1;            // shift the next bit into the low position
    }
    return 0;
}
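If the leading zero bits are also wanted (so that 123 really prints as 11011110), one option is to print a fixed number of bits. A small sketch of that variant (the 8-bit width and the loop counter are my additions):
#include <stdio.h>

int main()
{
    unsigned int i = 123;
    int bit;
    /* print the 8 low-order bits, least-significant first,
       so the leading zeros of 01111011 appear as trailing zeros here */
    for (bit = 0; bit < 8; ++bit)
        printf("%d", (i >> bit) & 1);
    return 0;
}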
virtual destructor purpose
Que: If a class has at least one virtual function, one other member function of that class should also be declared virtual. Which function is it?
Ans: The destructor.
Purpose of a virtual destructor:
A virtual destructor ensures that the correct (derived) destructor is called when an object is deleted through a base-class pointer, i.e., under dynamic binding.
class Base
{
private:
    int x;
public:
    virtual int getValue() { return x; }
};
class Derived : public Base
{
private:
    int y;
public:
    int getValue() { return y; }
};
int main()
{
    Base* b = new Derived();
    delete b;
}
In the above program, do we need a virtual destructor?
Strictly speaking, deleting a derived object through a base pointer without a virtual destructor is undefined behaviour, but in practice nothing visibly goes wrong here, because Derived adds no resources of its own that need cleanup when we call delete b;.
Suppose the Derived class is something like this:
class Derived : public Base
{
private:
    int y;
    int* p;
public:
    int getValue() { return y; }
};
Then a problem occurs. What is the problem?
Assume that memory has been allocated for "int* p" and the destructor is not virtual.
Then, in the case Base* pb = new Derived();, only the Base class destructor gets called on delete,
and there is no way to free the memory allocated for p inside Derived.
So, we get a memory leak.
Yes, to avoid memory leaks we have to use a virtual destructor.
It is needed not only to avoid memory leaks, but also to release file handles and other resources and to unlock mutexes held by the derived class.
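A minimal sketch of the fix (the constructor and destructor bodies here are my assumptions, since the original post does not show them): give Base a virtual destructor so that Derived's destructor runs and can free p.
class Base
{
private:
    int x;
public:
    virtual ~Base() {}                      // virtual: ensures ~Derived() runs via a Base*
    virtual int getValue() { return x; }
};
class Derived : public Base
{
private:
    int y;
    int* p;
public:
    Derived() : y(0), p(new int[10]) {}
    ~Derived() { delete [] p; }             // now called by "delete b", so no leak
    int getValue() { return y; }
};
int main()
{
    Base* b = new Derived();
    delete b;                               // calls ~Derived(), then ~Base()
}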
Saturday, March 27, 2010
To find the kth largest element in a Binary Search Tree (BST)
1. Count the number of nodes in the tree (a sketch of this helper appears at the end of this post):
int nodeCount = GetNumberOfNodes(root);
2. Traverse the tree using in-order traversal.
In-order traversal is: Left subtree, Root node, Right subtree.
int currNode = 0;
InorderTraversal(root, &currNode, nodeCount, k);
root = root node of the tree
nodeCount = number of nodes in the tree
currNode = number of nodes visited so far
k = which largest element we want (k = 1 is the largest)
struct Node
{
    int data;
    struct Node* left;
    struct Node* right;
};
void InorderTraversal(struct Node* root, int* currNode, int nodeCount, int k)
{
    if (root == NULL)
        return;

    InorderTraversal(root->left, currNode, nodeCount, k);    // visit the left subtree

    ++(*currNode);                                            // visit this node
    if (*currNode == nodeCount - k + 1)                       // the (nodeCount-k+1)th smallest is the kth largest
        printf("%d th largest element is: %d\n", k, root->data);

    InorderTraversal(root->right, currNode, nodeCount, k);   // visit the right subtree
}
Note: traversing a BST in-order visits the nodes in sorted (ascending) order, so an in-order walk effectively gives us a sorted array. Therefore, with a 0-based array and k starting at 1, the kth largest element is arr[nodeCount - k] and the kth smallest element is arr[k - 1].
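For completeness, a sketch of the node-count helper from step 1 (its exact shape is my assumption, since the original post only shows the call):
int GetNumberOfNodes(struct Node* root)
{
    if (root == NULL)
        return 0;
    /* this node plus everything in the left and right subtrees */
    return 1 + GetNumberOfNodes(root->left) + GetNumberOfNodes(root->right);
}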
Friday, March 26, 2010
integer to hex conversion
int integer = 123;

void itox(int integer)
{
    if (integer == 0)                        // nothing left to print (assumes a non-negative input)
        return;

    itox(integer / 16);                      // print the higher-order digits first

    int temp = integer % 16;
    if (temp <= 9)
        printf("%d", temp);                  // digits 0 - 9
    else
        printf("%c", 'A' + temp - 10);       // digits A - F
}
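A quick usage sketch (the main function and the expected output are my additions); note that itox as written prints nothing for 0, so a caller may want to handle that case separately.
#include <stdio.h>

void itox(int integer);    /* defined above */

int main()
{
    itox(123);             /* prints 7B */
    printf("\n");
    itox(255);             /* prints FF */
    printf("\n");
    return 0;
}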
Thursday, March 25, 2010
How can we display the bits of an integer without using an array, a string, or even a single extra variable?
Here is the answer...
#include <stdio.h>

int main()
{
    unsigned int i = 123;                    // unsigned, so the left shifts are well defined
    while (i != 0)
    {
        if (i & (1u << 31))                  // test the current most-significant bit
            printf("1");
        else
            printf("0");
        i <<= 1;                             // shift the next bit into the top position
    }
    return 0;
}
/* for i = 123 this prints all 32 bits, most-significant bit first */
'Debug' works fine, but not in 'Release' and vice-versa.
Sometimes an operation works fine in Debug mode but not in Release mode, or vice-versa.
Here are some common causes of this problem.
1. An uninitialized local variable usually holds a different value in Debug and in Release: the Debug build fills uninitialized stack variables with a non-zero pattern (0xCC), while in Release they hold whatever happens to be in memory, which is often zero.
So, the following code may work fine in Debug and fail in Release:
int enabled;
if( enabled )
DoSomeThing();
So, please initialize variables when they are declared.
2. One of the main differences between Debug and Release is optimization.
We can set this from Properties -> C/C++ -> Optimization.
By default, optimization is turned off in the Debug configuration of a Visual C++ program and turned on in the Release configuration.
Sometimes, however, a bug appears only in an optimized build of the program. In that case, we must debug the optimized code.
Here are some techniques to test the application:
- In Debug mode, set the optimization level to the same level as Release and run the application.
- In Release mode, change the optimization level and run the application.
3. Check where and how the _DEBUG macro is used in the code.
Look for any code inside an "#ifdef _DEBUG" block that might also be needed in the Release version.
4. Sometimes warnings are important.
When starting a project, turn the warning level up to "Level 3" or "Level 4" and make sure the code compiles with minimal or no warnings.
5. Don't remove needed code in Release.
Several commonly used macros are completely removed when you compile in Release mode.
For example, ASSERT and TRACE macros are quietly removed in Release builds, so any required code
that was placed inside an ASSERT or TRACE call is removed along with them (see the sketch after this list).
6. Make the Debug mode more like Release.
Make the Debug configuration behave more like the Release configuration and try to reproduce the problem in Debug mode.
The main difference is that the _DEBUG macro is defined in Debug and the NDEBUG macro is defined in Release.
So, try defining NDEBUG in the Debug configuration instead of _DEBUG.
7. Generally we debug code in the Debug configuration, but we can also debug a Release build by enabling "Generate Debug Info" under Project Properties -> Linker -> Debugging -> Generate Debug Info.
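A minimal sketch of the pitfall from point 5 (Initialize is a hypothetical function, and this assumes an MFC/CRT-style ASSERT that expands to nothing in Release builds):
// WRONG: the entire call disappears in a Release build,
// because ASSERT(...) compiles to nothing there.
ASSERT( Initialize() );

// RIGHT: perform the side effect unconditionally
// and assert only on the result.
BOOL ok = Initialize();
ASSERT( ok );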
Avoiding if condition in CUDA
Branching is one of the most time-consuming operations in CUDA (and in general as well).
For example, see the CPU code and GPU code below.
CPU code:
void CPUCode( int* input, int* output, int length )
{
    for ( int i = 0; i < length; ++i )
    {
        output[ i ] = input[ i ] + 2 * input[ i + 1 ];
    }
}
GPU code:
__global__
void GPUCode( int* input, int* output, int length )
{
    int idx = __umul24( blockDim.x, blockIdx.x ) + threadIdx.x;
    if ( idx < length )
    {
        output[ idx ] = input[ idx ] + 2 * input[ idx + 1 ];
    }
}
In the GPU code above there is an if condition, and it is executed by every thread. Execution is fastest when every thread executes the same instruction at the same time; that is, the kernel code (the __global__ function) should be straight-line code with no branching inside it.
Look at the modified GPU version of the code:
__global__
void GPUCode( int* input, int* output, int length )
{
    int idx = __umul24( blockDim.x, blockIdx.x ) + threadIdx.x;
    idx = max( idx, 0 );
    idx = min( idx, length - 1 );                              // clamp the index instead of branching
    output[ idx ] = input[ idx ] + 2 * input[ idx + 1 ];
}
There is no branching in the modified GPU code above: the kernel is straight-line code, so every thread executes the same instruction at the same time (out-of-range threads simply redo the work for the last element instead of diverging). This kind of code runs very fast on the GPU.
The above technique is applicable to general CPU code as well.
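For context, a minimal launch sketch (the block size and the d_input/d_output device pointer names are my assumptions, not from the original post). The grid is rounded up so every element gets a thread, and the clamped index inside the kernel takes the place of the if-guard:
int threadsPerBlock = 256;
int blocks = ( length + threadsPerBlock - 1 ) / threadsPerBlock;   // round up
GPUCode<<< blocks, threadsPerBlock >>>( d_input, d_output, length );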
Viewing all elements of dynamically allocated arrays in VC++
Normally, when we allocate an array dynamically and look at the variable in the Watch window, we can see only the first element. To view the entire array, type the pointer name followed by a comma and the number of elements.
Example:
#include <stdlib.h>

int main( void )
{
    const int SIZE = 10;
    int *intArray = 0;
    intArray = (int *)malloc( sizeof( int ) * SIZE );
    for( int x = 0; x < SIZE; x++ )
    {
        intArray[x] = 10 + x;
    }
    return 0;   // put a breakpoint here, so that control stops here and
                // we can see the "intArray" contents in the Watch window
}
Watch window: Debug -> Windows -> Watch
To see intArray in the Watch window, type
intArray, 10
and now we can see all 10 elements of "intArray" in the Watch window.
Detecting memory leaks in Visual Studio (Debug mode)
Microsoft Visual Studio (2003/2005) can report memory leaks in a module when the "leak check" bit, _CRTDBG_LEAK_CHECK_DF, is set; the report is sent to the debugger when the program exits.
The memory leak information is printed in the "Output" window.
[ This works in "Debug" mode only. ]
For example,
#include <stdio.h>
#include <stdlib.h>
#include <conio.h>
#include <tchar.h>
#include <crtdbg.h>

int _tmain()
{
    int* ptr = NULL;
    ptr = (int*)malloc(sizeof(int)*20);
    for( int i=0; i<20; ++i )
        ptr[i] = i;

#ifndef NDEBUG
    int flag = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG);   // Get current flag
    flag |= _CRTDBG_LEAK_CHECK_DF;                    // Turn on leak-checking bit
    _CrtSetDbgFlag(flag);                             // Set flag to the new value
#endif

    //free(ptr);
    printf("\n\nPress Enter...");
    getch();
    return 0;
}
If we run the above program, we can see the memory leak information in the Output window as follows...
Detected memory leaks!
Dumping objects ->
{56} normal block at 0x02D42D48, 80 bytes long.
Data: < > 00 00 00 00 01 00 00 00 02 00 00 00 03 00 00 00
Object dump complete.
Of course, this output does not describe where the memory leak is or in which file, but the information is still valid and useful.
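If the file name and line number of each leaked allocation are wanted in the dump, one option (a sketch, not part of the original post) is to define _CRTDBG_MAP_ALLOC before including the CRT headers, which maps malloc/free to their debug versions:
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
// with this mapping, the "Detected memory leaks!" dump also lists the
// source file and line number of each leaked malloc call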
Trimming float value
Sometimes we may need a float value like 2.345 instead of the complete float value 2.345768,
so we need to trim the float value. Below is a function that can do this for us.
function name: trimFloat( )
param1: float f; the original float value, e.g., 2.345768
param2: int decimals; how many digits we want after the decimal point (.)
return: the trimmed float value
#include <stdio.h>
#include <string.h>
#include <ctype.h>

float trimFloat(float f, int decimals)
{
    char valueStr[20] = {0};
    sprintf(valueStr, "%f", f);

    int i = 0, d = 0, sign = 0;
    float val = 0.0f;
    float pow = 1.0f;

    int length = strlen(valueStr);
    if (length <= 0)
        return 0;

    //skip spaces, if any
    for (i = 0; isspace(valueStr[i]); ++i)
        continue;

    //check whether the float value is negative or positive
    sign = (valueStr[i] == '-') ? -1 : 1;
    if (valueStr[i] == '-' || valueStr[i] == '+')
        ++i;

    //get the integer part and store it in val
    for (val = 0.0f; isdigit(valueStr[i]); ++i)
        val = 10.0f * val + valueStr[i] - '0';

    //if '.' occurs, just step over it
    if (valueStr[i] == '.')
        ++i;

    //now read at most 'decimals' digits after the point
    for (pow = 1.0f, d = 0; isdigit(valueStr[i]) && d < decimals; ++i, ++d)
    {
        val = 10.0f * val + valueStr[i] - '0';
        pow *= 10;
    }

    //return the result
    return sign * val / pow;
}
The same function can be used for trimming a double (in that case, 'val' and 'pow' should be of type double in the program above).
Note:
I have tested it on VS2003 and it gives exact values, i.e., 2.34 / 2.345 / 2.3457 depending on the 'decimals' argument.
On VS2005 it gives 2.340000 / 2.345000 / 2.345700.
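A quick usage sketch (the main function is my addition, assuming the VS2003-style behaviour described above):
int main()
{
    float f = trimFloat(2.345768f, 3);
    printf("%f\n", f);    /* approximately 2.345; a float cannot hold it exactly */
    return 0;
}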
Why Prefix is faster than Postfix
The prefix operation is faster than the postfix operation.
Let's see how postfix and prefix work.
// postfix
iterator operator++(int)
{
    iterator _Tmp = *this;
    ++*this;
    return (_Tmp);
}
// prefix
iterator& operator++()
{
    _Ptr = _Acc::_Next(_Ptr);
    return (*this);
}
Note: the dummy int parameter in the postfix version exists only to distinguish the postfix overload
from the prefix overload; it is never used.
(Look at prefix / postfix operator overloading.)
Difference between postfix and prefix:
- One extra object, _Tmp, is needed in the postfix operation.
- One extra copy constructor call is made because of the following line in the postfix operation:
iterator _Tmp = *this;
Now, let's look at two cases.
Case 1:
for( int i = 0; i < 200; i++ )
    out[ i ] = in[ i ];
In this case we gain nothing by using prefix: only one extra integer is created and no constructor is called.
Case 2:
vector<int> the_vector;
vector<int>::iterator the_iterator;
the_iterator = the_vector.begin();
for( ; the_iterator != the_vector.end(); the_iterator++ )
    // some operation here
Here we definitely gain performance by using prefix (++the_iterator) instead of postfix: the prefix version just moves the iterator to the next object, with no temporary copy.
So, whether the loop variable is an integer or an object, we should use prefix in order to write the better program.
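For completeness, a small self-contained sketch of case 2 rewritten with the prefix operator (the element type and the operation in the loop body are my assumptions):
#include <vector>
using namespace std;

void incrementAll( vector<int>& the_vector )
{
    for( vector<int>::iterator it = the_vector.begin(); it != the_vector.end(); ++it )
    {
        *it += 1;    // some operation here
    }
}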