Algorithm...
Memorandum for NPC problem
This Wednesday I visited the professor during his office hour after a lecture and lab demo. I learned a lot and should have written it
down immediately. However, I was too tired and still had a lecture in the evening. Therefore, I write it down now.
1. Is the polynomial reduction function a mapping of problem instances between two problems in an isomorphic manner?
No. Actually the phrase "mapping problem instances between two problems" is an oversimplification of my own which is not
accurate; I will discuss this shortly.
2. The second question is actually similar to the first one: the function is not even ONTO if you still think in my
oversimplified way.
3. The definition is like this:
The following is what I understand, partly copied from the textbook.
Let's use binary encoding to represent our problems. The polynomial reduction function is defined:
f: star({0,1})--->star({0,1}) where star(L) stands for Kleene star of language L.
So the inputs and outputs of f are nothing but strings of 0's and 1's.
Then we have the definition of decision problems:
L={x in star({0,1}): Q(x)=1}
A problem is a language such that there exists an algorithm which can accept it. What's more, for a decision problem the
algorithm not only accepts x if x is in the language, but also rejects x if x is not in the language.
Is it meaningful to ask whether a polynomial reduction function is ONTO or not? Of course not! Its image need not cover all of star({0,1}).
4. The textbook gives a method to prove that a language L is NP-complete:
a) Prove L in NP;
b) Select a known NP-complete language L';
c) Describe an algorithm that computes a function f mapping every instance x in star({0,1}) of L' to an instance f(x) of L.
d) Prove that the function f satisfies x in L' if and only if f(x) in L for all x in star({0,1});
e) Prove that the algorithm computing f runs in polynomial time.
My intuition was that in d) it should be enough to prove that f satisfies "if x in L' then f(x) in L" for all x in star({0,1}),
because this says that every instance of L' can always be reduced to an instance of L. Therefore L is AT LEAST as difficult
as L'. Hence L is an NP-complete problem. Why do we bother to prove the converse?
The answer is still within the definition of a formal language. A language is simply defined as strings of 0's and 1's, and the
"if and only if" makes sure that strings which are not in L' are mapped outside of L, so that membership is preserved in both
directions. (I was kind of convinced by the professor's explanation, but am beginning to doubt it now.)
5. A common mistake is to assume that the N-coloring map problem is exactly the maximum clique problem.
6. I tried to persuade myself that some NP-completeness proofs would be easier with an appropriately chosen known NPC problem. I just
wondered whether there is any ordering among NPC problems. The professor says no. Then, by logic, if I can reduce L to L', I should also
be able to reduce L' to L, provided L and L' are both NPC problems.
Reduce, Reduce, Reduce
...
The answer:
It is an illusion to read "<=p" as a size comparison, as if the RHS were simply "bigger". The reduction maps every LHS instance
to an instance of the RHS, so the LHS sits inside the RHS as a special case, and the RHS must be at least as general (and at
least as hard). You cannot reduce the general problem to the special one. Observe that every fifty-fifty (partition) instance is
a special case of subset-sum, while a subset-sum instance may not be transformable into a fifty-fifty instance.
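The fifty-fifty/subset-sum example can be made concrete. Below is a Python sketch (the function names are my own) of a reduction f from the fifty-fifty (partition) problem to subset-sum; the brute-force decider is only there to check the mapping on tiny inputs:

```python
def partition_to_subset_sum(nums):
    """Reduction f: a PARTITION instance maps to a SUBSET-SUM instance,
    so that x is in PARTITION iff f(x) is in SUBSET-SUM."""
    total = sum(nums)
    if total % 2:
        # An odd total can never be split fifty-fifty; map to an
        # unsatisfiable subset-sum instance (target exceeds the total).
        return nums, total + 1
    return nums, total // 2

def subset_sum(nums, target):
    """Exponential brute-force decider, only to check the reduction."""
    reachable = {0}
    for n in nums:
        reachable |= {r + n for r in reachable}
    return target in reachable
```

Note that subset_sum here is exponential and only serves for checking; the point of the reduction is that f itself runs in polynomial time.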
To show: the evaluation function of an A* algorithm with a monotone heuristic is monotonically non-decreasing.
The definition of A* algorithms is:
Consider the evaluation function f(n) = g(n) + h(n), where g(n) is the actual cost from the start state to state n and h(n) is always less than or equal to the cost of the minimal path from n to the goal state. A "best-first search" algorithm using such an f is an A* algorithm.
Proof: consider two states i, j where j is a descendant of i. By the monotone property,
h(i) - h(j) <= cost(i, j)    (a)
Since g(n) is the actual cost accumulated from the start state to n, when j is reached through i we have g(j) = g(i) + (actual cost of the path from i to j) >= g(i) + cost(i, j), where cost(i, j) is the cost of the minimal path from i to j. Therefore
cost(i, j) <= g(j) - g(i)    (b)
Then f(j) - f(i) = g(j) - g(i) + h(j) - h(i) >= cost(i, j) - (h(i) - h(j)) >= 0.
Recall that the definition of a monotone heuristic is:
1. For all states i, j where j is a descendant of i, h(i) - h(j) <= cost(i, j), where cost(i, j) is the actual cost from state i to state j;
2. h(goal) = 0, i.e. the heuristic value of the goal state is 0.
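As a sanity check, both conditions can be verified mechanically on a small graph. Here is a Python sketch (the graph, costs and h values are made up), which also shows f = g + h non-decreasing along a path under a monotone h:

```python
def is_monotone(edges, h, goal):
    """edges maps (i, j) -> cost(i, j); checks the two conditions above."""
    return h[goal] == 0 and all(h[i] - h[j] <= c for (i, j), c in edges.items())

def f_along_path(path, edges, h):
    """f(n) = g(n) + h(n), with g accumulated along the given path."""
    g, fs = 0, [h[path[0]]]
    for i, j in zip(path, path[1:]):
        g += edges[(i, j)]
        fs.append(g + h[j])
    return fs

# Made-up example: start s, intermediate a, goal t.
edges = {("s", "a"): 1, ("a", "t"): 2}
h = {"s": 3, "a": 2, "t": 0}
```

For this example is_monotone holds, and f stays constant at 3 along the path s, a, t, consistent with the proof above.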
It is raining hard while I am getting the textbook back.....
Quoting nick huang <nickhuang99@hotmail.com>:
> hi,
>
>
> I spent a whole day thinking about the proof of "monotone heuristic
> implying A* algorithm" in the textbook on page 142 (4th edition). I feel a
> bit confused and suspect there is a possible flaw in this proof.
Your feeling is correct.
Perhaps Luger's textbook is not well written. I had the same problem when I
first read this textbook.
> On the text, it uses h(s1)-h(s2)<=cost(s1,s2) as the property of monotone.
OK, that is a definition. And by this definition, monotonicity will be a
special case of admissibility. (Sorry, I forget what admissibility is.)
> But by definition of monotone heuristic, this "cost" function is the "actual"
Yes, "actual cost" is printed there. But Luger might want to say "any
possible actual cost". I guess "any possible actual cost" is the right
phrase to put there.
See "One way to describing the monotone property is that the search space is
everywhere locally consistent....", and the next sentence "The difference
between the heuristic measure for a state and any one of its successors is
bound by the actual cost of going ..."
All these statements give hints that Luger is talking about "any possible
path from a node to its successor" instead of a fixed actual path.
> cost from state s1 to s2 instead of cost of "minimal path" from s1 to s2.
> Since we haven't proved that monotone heuristic guarantees to produce
> optimal solution, we cannot claim that the "actual cost" is the cost of
> "minimal path". However, by definition of A* algorithm, only those
> algorithms such that the heuristic function h(n) is smaller than or equal
> to the cost of "minimal path" from n to the goal are considered A*
> algorithms. So, I guess the textbook is using "actual cost" to represent
> "cost of minimal path" in the proof.
If my conjecture above is right, then the "minimal path" is just one kind of
path out of many possibilities and is already covered in the definition of
"monotonicity".
> But this is not correct, since the actual cost is always bigger than or
> equal to the cost of the minimal path, so this doesn't prove
> h(s1)-h(s2)<=cost(s1,s2) for the minimal path.
>
> By the way, if this proof is valid, we even don't have to employ such
> summation procedure, just by definition of monotone of
> h(n1)-h(n2)<=cost(n1,n2)
> plug in n2=goal state and get for any state n1 such that
> h(n1, goal)<=cost(n1, goal) to reach the conclusion.
I cannot understand the above two lines. Do you misprint something there?
>
> Surely you wouldn't agree this proof, would you?
>
> Thank you for your time and awaiting your reply promptly,
>
> have a nice day,
>
> nick huang/qingzhe huang
>
From nick:
hi,

Thank you for your message and detailed explanation!

1. Your conjecture is interesting, but I am not convinced yet, because:
a) If "actual cost of any possible path" is what Luger wants to express, he probably would have invented a new concept for it, because it is not so obvious.
b) This makes the A* algorithm and monotonicity rather similar: A* only requires each state to be compared with the goal state, while monotonicity requires every pair of states to be compared. Is this too close to justify inventing two concepts?
c) The line you said you don't follow is like this: if the "cost" in monotonicity refers to the cost of the minimal path, then the proof is quite simple, even simpler than what is shown in the text on page 142. You only need to prove "for all states n, if h(n)-h(goal)<=cost(n, goal), then it is an A* algorithm." (Please check the definition of A* algorithm.) However, by the definition of monotone, h(goal) is 0. So this is always true, which means we have already proved that monotone implies A*. But this seems too easy, and we don't even need the summation procedure.

2. A* and monotone can be drawn as a graph like this:
[ASCII sketch: two non-decreasing curves over the state space, labeled A* and mono, both converging at the goal state]
If the y-axis is f = g(n) + h(n) and the x-axis is the state space n, then both the A* curve and the monotone curve are non-decreasing (this can be proved easily), and they both converge at the goal state: in the case of A*, f(goal) = h(goal) + g(goal) where h(goal) = 0, since the minimal path from goal to goal has cost 0 and h(goal) <= 0; in the case of monotone, f(goal) = h(goal) + g(goal) where h(goal) = 0 by definition. So the two curves converge at the coordinate (goal, g(goal)). However, the starting points may not be the same, because g(start) is always 0, while h(start) in the case of A* may differ from h(start) in the case of monotone. I have a feeling that we might be able to prove by contradiction that the two curves coincide even at the start point, but I tried and failed; maybe you can do that.

3. Nice to discuss with you.

have a nice day,
From DY:
> hi,
>
> Thank you for your message and detailed explanation!
> 1. Your conjecture is interesting, but I am not convinced yet because
> a) If "actual cost of any possible path" is what Luger wants to express, he
> probably would invent a new concept for this because it is not so obvious.
> b) This will make A* algorithm and monotonicity a bit similar, see A* only
> requires each state to compare with goal state and monotonicity requires
> each two state to compare. Is this a bit too close to invent two concept?

The two concepts are much related. Viewed historically, A* comes first, then comes monotonicity. The latter is a special case of and implies the former, while the former is more general than the latter. Monotonicity is nothing but a cheap toy played with by its creators and some writers like Luger. By the way, the summation procedure is kind of a show-off, I think.

> c) The line you said you don't follow is like this: if the "cost" in
> monotonicity is referring to cost of minimal path, then the proof is quite
> simple. Even simpler than what is showed in text on page 142. You see, you
> only need to prove "for all states n, if h(n)-h(goal)<=cost(n, goal), then
> it is A* algorithm." (Please check definition of A* algorithm)

Yes, I believe this line of reasoning is acceptable (because monotonicity means "everywhere locally admissible").

> However, by definition of monotone, h(goal) is 0. So, this is always true
> which means we already prove monotone is A* algorithm. But this seems too
> easy and we even don't need the summation procedure.

No summation procedure is really needed in order to prove that monotonicity implies A*. Since Luger sticks to "local admissibility" for the concept of monotonicity, a summation procedure makes the "locality" clearer to readers.

> 2. A* and monotone can be drawn as graph like this:
> [the f = g(n) + h(n) curve sketch and its explanation, quoted in full from the previous message]
> I have a feeling that we might be able to prove by contradiction that the
> two curves coincide even at the start point, but I tried and failed, maybe
> you can do that.

I don't think I could do that. By the way, I am interested in finding a monotone heuristic for exercise 2 on page 156 (Version IV). If you know one, or you know someone who does, please let me know the answer. I will be very much interested in that.

> 3. Nice to discuss with you.

Best wishes, and have a nice day.
Camera: It is essential to have flexible and realistic flight control in a shooting game; otherwise the player might be unable to track fast-moving targets. We supply a third-person-view camera apart from the first-person-view camera in order to help the player re-orient when lost in flight and easily detect moving targets, because without much sense of coordinates a pilot can easily lose the sense of location. As the camera is closely connected with flight control, the algorithm is introduced as follows.
Flight Control: We supply six degrees of freedom of flight control: forward/backward with respect to the current heading of the chopper, left/right perpendicular to the current heading, upward/downward perpendicular to the current heading, and yaw, pitch, roll. Therefore the player can move into any arbitrary position and posture.
Design Approach: At the beginning, I wanted to use an idea similar to the OpenGL function "gluLookAt", which uses two vectors plus one coordinate to represent both position and posture of the chopper. However, this proved to be a bad choice because of two disadvantages:
a) You cannot take advantage of the rotation functions in OpenGL and have to calculate your own transformation matrix. By solving the cross products and dot products of vectors, I need to solve a linear system, and at some boundary cases I need to handle divide-by-zero situations. It is quite a computation-intensive approach for the programmer. (Even though these calculations cannot be avoided in any approach, they can be done by OpenGL and possibly be optimized.)
b) Another disadvantage is that trigonometric functions are not linear. Therefore, using the vector representation, I cannot interpolate the transition by linearly modifying the vectors unless the vectors are transformed back to angles. The immediate result is a loss of smoothness in flight control. This is why I returned to the current design, which uses angles plus a coordinate.
Implementation: I define a Movement structure which contains the current coordinate of the chopper plus the angles of yaw, pitch and roll from the original posture. However, in the first-person-view camera it is not simply three rotations, because the previous two rotations have some effect on the third rotation. It is like this:
/* First-person look-at point: offset the chopper position by the heading
   derived from yaw, pitch and roll. */
GLfloat LookAt_x = current_coord.x + sin(yaw) * cos(pitch);
GLfloat LookAt_y = current_coord.y + sin(pitch) * cos(roll);
GLfloat LookAt_z = current_coord.z + cos(yaw) * cos(pitch);
In the assignment I didn't solve this problem, and I feel happy that I solved it in my project. This is one thing that I learned from this project.
Flight Control Implementation: Flight control becomes much easier once I choose the correct representation of the position and posture of the chopper, because the calculations are all similar and I can reuse the same function for the other movements. For example, I define a function "calcCoord" which calculates the relative coordinate offset with respect to the current position of the chopper when the chopper moves a distance d in the forward direction. It is like this:
/* Offset for moving a distance d along the current forward heading. */
x_offset = d * cos(pitch) * sin(yaw);
y_offset = d * sin(pitch) * cos(roll);
z_offset = d * cos(pitch) * cos(yaw);
Then all the other movements are done by simply changing the parameters passed in:
backward: change distance d to -d
left/right: change yaw to yaw+90 or yaw-90, respectively
up/down: change pitch to pitch+90 or pitch-90, respectively
yaw/pitch/roll: just modify the respective angle, without further calculation
Even the "head light" is calculated in this manner.
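The parameter substitutions above can be sketched as follows (Python for brevity, with hypothetical names; the project code is C/OpenGL, and the angles here are in radians):

```python
import math

def calc_coord(d, yaw, pitch, roll):
    """Relative offset for moving distance d along the current heading;
    the same formulas as the calcCoord snippet above (angles in radians)."""
    x = d * math.cos(pitch) * math.sin(yaw)
    y = d * math.sin(pitch) * math.cos(roll)
    z = d * math.cos(pitch) * math.cos(yaw)
    return x, y, z

HALF_PI = math.pi / 2

def move_offset(direction, d, yaw, pitch, roll):
    """Reuse calc_coord for every direction by changing the parameters."""
    if direction == "backward":
        return calc_coord(-d, yaw, pitch, roll)
    if direction == "left":
        return calc_coord(d, yaw + HALF_PI, pitch, roll)
    if direction == "right":
        return calc_coord(d, yaw - HALF_PI, pitch, roll)
    if direction == "up":
        return calc_coord(d, yaw, pitch + HALF_PI, roll)
    if direction == "down":
        return calc_coord(d, yaw, pitch - HALF_PI, roll)
    return calc_coord(d, yaw, pitch, roll)  # forward
```

For example, flying forward with zero yaw and pitch moves purely along z, while "up" rotates the same formula by 90 degrees of pitch.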
The gun fire simulation contains four algorithms: rocket flight tracking, target tracking, collision detection, and explosion simulation.
Rocket Flight Tracking: Rocket flight is exactly the same as the chopper flight control system, except that each rocket must maintain its own starting data for position, posture and moment. These data are updated whenever the display callback is executed.
a) position: The position is exactly the position of the chopper when the rocket is fired.
b) posture: Since we allow the player to shoot at an arbitrary angle by clicking the mouse in the first-person-view window, the starting posture of the rocket is slightly different from that of the chopper when fired. By calculating the coordinate of the mouse click, we can figure out the relative offset of the mouse pointer from the center of the first-person view, which is regarded as the current heading of the chopper. Even though the relationship between the coordinate offset and the current yaw, pitch and roll of the chopper is not linear, the player still gets a realistic impression, because we only need an initial shooting direction for each rocket. The calculation of the initial direction of the rocket is like this:
angle_offset_yaw = horizontal_offset / half_width_window * horizontal_view_angle;
angle_offset_pitch = vertical_offset / half_height_window * vertical_view_angle;
The vertical_view_angle is chosen as the view angle passed to the perspective projection function. And
horizontal_view_angle = vertical_view_angle * (height_window / width_window);
c) moment: the timing data acquired from the environment by the function "GetTickCount".
Then the rocket flies exactly like the chopper.
Target Tracking: In a real 3D shooting game, target tracking would be very complicated, since AI might be involved. In our project we choose a simple solution: all targets move at a constant speed in a constant direction. Therefore, to track the position of every target, we take its initial position and add an offset of elapsed_time * moving_speed along its direction of movement.
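This constant-velocity tracking can be sketched as (Python, with illustrative names):

```python
def target_position(initial, direction, speed, elapsed):
    """position = initial + elapsed * speed * direction (unit direction)."""
    return tuple(p + elapsed * speed * d for p, d in zip(initial, direction))
```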
Collision Detection: Please see the following graph.
Figure 10 Collision detection: the arrows represent the vector of the rocket's flying direction and the direction from the firing position to the target.
a) Design Approach: A naive collision detection algorithm simply calculates the instantaneous distance between each pair of flying rocket and target. However, the display callback is executed in a discrete manner, so it is possible that the callback is called when the rocket is at the "near point" and then at the "far point". At both positions the distance between rocket and target might be bigger than a pre-defined "explosion distance", yet the rocket can obviously hit the target between these two points. So this naive algorithm won't work properly.
b) Implementation Details:
At any moment, for any flying rocket with respect to any target, the collision detection algorithm is the same. We need to calculate the collision point and then compare the current position of the rocket with it. If we take the direction of movement of the target into consideration, the calculation is quite complicated. Therefore, I deliberately make the movement speed of targets very small compared with the speed of rockets, so that we can ignore the target's direction of movement. However, we still consider the changing position of the target at each moment.
The collision happens only when these two conditions are satisfied:
i) At some future moment, the shortest distance from the target position to the line of rocket flight is smaller than the "explosion distance". (As mentioned above, we assume the target moves very slowly, so between two consecutive display callbacks we do not take the target's direction into consideration.)
ii) The current position of the flying rocket is at or beyond this collision point.
The algorithm uses the dot product to calculate the angle t between the vector v1 representing the rocket's flying direction and the vector v2 from the firing position to the current target position. The distance d from the firing position to the collision point is:
d = dot(v1, v2); // v1 must be normalized first.
Then, by checking the current flying time, we can know whether the rocket has reached or passed the explosion point.
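The two conditions can be sketched in code (Python here for brevity; the project itself is C/OpenGL, and the vector names follow the description above):

```python
import math

def collision_point_distance(rocket_dir, v2):
    """d = dot(v1, v2) with v1 the normalized rocket direction; also
    returns the shortest distance from the target to the flight line."""
    norm = math.sqrt(sum(c * c for c in rocket_dir))
    v1 = [c / norm for c in rocket_dir]
    d = sum(a * b for a, b in zip(v1, v2))   # distance to collision point
    closest = [d * c for c in v1]            # point on the flight line
    miss = math.dist(closest, v2)            # shortest target-to-line distance
    return d, miss

def rocket_hits(rocket_dir, v2, flown_distance, explosion_distance):
    """Conditions i) and ii): close enough to the flight line, and the
    rocket has reached or passed the collision point."""
    d, miss = collision_point_distance(rocket_dir, v2)
    return miss <= explosion_distance and flown_distance >= d
```

Checking flown_distance against d, rather than the instantaneous rocket-to-target distance, is exactly what avoids the near-point/far-point problem of the naive algorithm.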
Explosion Simulation: The explosion is assumed to happen at the moment when the rocket hits the target, at the position where the target is at that moment.
i) Flight Algorithm: Basically, explosion fragments fly like rockets: once the initial flying data is set up, the only thing to do each time the display callback is executed is to update the positions.
ii) Initializing Flying Data: As a simple solution, I just use the position of the target at the moment of explosion and choose the yaw, pitch and roll of each fragment randomly.
iii) Fragment Representation: As a simple solution, I use randomly sized and colored cubes to represent the fragments, as we had some problems importing objects from Maya.
Design Approach: Originally I wrote a rather complicated library to encapsulate all OpenGL transformations. The idea is quite straightforward: I view every transformation as changing the properties of one object in the following ways: position, color, containing volume size, and axis direction. Therefore, the programmer only needs to set up these parameters for each object, and all transformation code is done by the library.

For example, we set a sphere at position coord(x,y,z), color(r,g,b), volume(w,h,l), direction(vx,vy,vz). Then my library performs the series of transformations: rotate to align the axis with direction(vx,vy,vz), set the color to color(r,g,b), translate to coord(x,y,z), and scale to volume(w,h,l). All high-level objects can be handled universally in this manner; the programmer only needs to implement a virtual drawing function in the derived class. In the case of a sphere, the drawing function is simply "glutSolidSphere". With this design we could even use a "script file" to drive the program, since each object is determined by these parameters, and the programmer can concentrate on high-level logic design without worrying about OpenGL transformation code, which is otherwise repeated zillions of times.

However, at the later stage of this course, more and more OpenGL features were introduced, such as lighting, texture, blending, etc. I had to expand my original framework again and again, and at some point some weird phenomena appeared, like OpenGL failing to detect the depth of objects. And I didn't have enough time for this project. So this approach was abandoned. This is one of my biggest regrets in this course, apart from the fact that I should have spent more time on some advanced OpenGL techniques.
Checkpointing vs. message logging, and failure vs. recovery
(The big truth that I learned)
Hey buddy, can you write down what you learned about recovery in the most concise way?
Sure.
1. The biggest mistake is to mingle data consistency with inter-process consistency. The best way is to put data consistency aside for
the moment.
2. If there is no "inconsistent cut" situation, each process can recover from wherever it likes.
3. Checkpointing and message logging are completely different approaches and should be treated separately. Checkpointing is
more intuitive: it simply "snapshots" the current state, and later, when a failure occurs, you can restore from this checkpoint. The run
from the checkpoint to the moment of failure may not be "replayed" exactly if the system is non-deterministic, which is almost always
the case in the real world.
4. The message-logging approach assumes that each non-deterministic event can be identified, recorded and replayed during recovery, so
that after a failure, by replaying these non-deterministic events, the system recovers to the point just before the failure. There are
many implementation issues and assumptions about log storage.
5. Checkpointing recovery can basically be independent as long as there is no inconsistent message exchange, or, in professional terms,
no orphan process: a process that received a message from the failed process which the failed process, restored from its checkpoint,
does not remember sending. In that case, after recovery the failed process has no memory of the sent message while the other process
did receive it. Such a recovery is inconsistent, which means the selected checkpoint is not appropriate and further rollback is needed.
This cascading rollback is called the domino effect.
6. Another way to avoid the domino effect is coordinated rollback, which relies on global checkpointing: a global checkpoint guarantees
no inconsistent cut. When a failure occurs, all processes roll back to this global checkpoint, eliminating any possible domino effect,
at the price that all processes are affected by a single process failure, even though the domino effect would not necessarily have happened.
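The orphan condition in 5 and the consistent cut in 6 can be checked mechanically. A Python sketch (the event-index bookkeeping is my own illustration):

```python
def is_consistent_cut(checkpoints, messages):
    """checkpoints: process -> index of the last event saved.
    messages: list of (sender, send_index, receiver, receive_index).
    A cut is inconsistent if some message was received before the cut
    but sent after the sender's checkpoint: an orphan message."""
    def orphan(sender, send_idx, receiver, recv_idx):
        return recv_idx <= checkpoints[receiver] and send_idx > checkpoints[sender]
    return not any(orphan(*m) for m in messages)
```

Coordinated rollback, in these terms, simply chooses a set of checkpoints for which is_consistent_cut holds over every logged message.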
7. There are basically pessimistic and optimistic logging schemes in the message-logging approach. The pessimistic scheme assumes failure
is a frequent event and always takes precautions in advance: all outgoing messages are recorded before being sent out. So after a failure,
by replaying the message log, all messages are guaranteed to be resent to the other processes, and no orphan process is possible after
recovery. (There may be a transient one after the failure, before recovery finishes.)
8. The optimistic approach assumes failures are few and far between. Therefore, running performance is given priority, and possible lost
messages are only analyzed after a failure. I haven't read the detailed algorithm yet.
9. That is all I learned in these two days. The last thing is that inter-process consistency must also be synchronized with data
consistency, in the sense that a data update operation in a database can also be lost when a failure occurs. A local database log must be
maintained, either independently or combined with the message log. Maybe an independent log is better, since recovery uses different
algorithms for these two issues.
The Virtual Synchrony
The textbook is, just as the professor mentioned in the first lecture, crisp and concise. (I forget the exact word he used.)
1. The textbook mentions that every query can be sent to a different member of the group by a simple method, say according to its rank in
the group. However, if you think this is done directly by the client program, you are in a trap, as I was. It is done by a kind of "load
balancing manager" or "wrapper" or whatever you call it, because usually the replication of servers and services is transparent to the
client program. For example, in CORBA one service is mapped to exactly one registry name even though replicas of the service are running
behind the scenes. The client always directs its request to exactly one server or agent of the server group.
2. Then can you give an exact picture of how virtual synchrony runs?
Yes, let me try.
a) In virtual synchrony, the simplest model has exactly one update initiator. Please note that "update initiator" is a term I made up;
don't be misled by the word "single". Actually, all updates are applied to every replica so that the replicas stay in a synchronized
state. Then why the word "single"? It simply means a single multicast sender, so that the causal order of updates can easily be achieved
by "fbcast" (FIFO order) from this single sender. As for all other kinds of queries, which don't change the state of the server, we don't
need to multicast them to each member of the group. Instead, if we ignore Byzantine errors, we can send such a query to any single
replica and trust its reply.
b) In virtual synchrony, all updates are done in causal order, and all "read" requests are actually in asynchronous mode, which means
that two read requests may be directed to two replicas running concurrently. This achieves the highest performance.
c) How do we achieve mutual exclusion for conflicting updates?
A lock is used to ensure mutual exclusion. Whenever there is an update, the sender issues a lock request to all replicas and waits for
confirmation replies. After receiving a grant for the lock request from each replica, the sender issues the update request to all
replicas. The locking algorithm can be as simple as: an exclusive lock is granted only when neither a non-exclusive lock nor another
exclusive lock is held. This is covered in any book about file locking.
d) Can we say updates run in lock-step mode in virtual synchrony?
Not exactly. In fact, we don't have to wait for replies from all replicas for an update, so it is faster than lock step. However, each
replica applies all updates in exactly the same sequence; i.e., given a series of queries, all the write queries are executed in
identical order at each replica, while each read query is probably executed at only one replica. So, in this sense, virtual synchrony is
like lock step in that all replicas maintain a synchronized state at every moment.
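A toy Python sketch of this ordering property (no real networking; the names are illustrative): every replica applies every write in the same sequence, so a read can be served by any one replica.

```python
class Replica:
    def __init__(self):
        self.state = {}

    def apply_write(self, key, value):
        self.state[key] = value

    def read(self, key):
        # A read is served locally by a single replica.
        return self.state.get(key)

def multicast_write(replicas, key, value):
    """Every replica applies every write, in identical sequence."""
    for r in replicas:
        r.apply_write(key, value)

group = [Replica() for _ in range(3)]
multicast_write(group, "x", 1)
multicast_write(group, "x", 2)
```

Because the write sequence is identical everywhere, any replica's answer to read("x") agrees with the others.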
3. How about Byzantine errors?
If we want to cope with Byzantine errors, we simply increase the number of replies for both read and write queries to t+1 before
replying to the client, where the total number of replicas is 3t+1. The potential performance gain of virtual synchrony might be lost
here, because we probably need to multicast even read requests and wait for more than one reply from the replicas.
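The reply counting can be sketched as (Python, illustrative):

```python
from collections import Counter

def byzantine_group_size(t):
    """With up to t Byzantine replicas: run 3t+1 replicas and wait for
    t+1 identical replies before answering the client."""
    return 3 * t + 1, t + 1

def accept_reply(replies, needed):
    """Return the value backed by at least `needed` identical replies, else None."""
    value, count = Counter(replies).most_common(1)[0]
    return value if count >= needed else None
```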
4. How about the GMS implementation?
Actually this is not a question specific to virtual synchrony, since dynamic group views are not only used for virtual synchrony.
Generally speaking, cbcast with a dynamic group view can be implemented with vector time, in the sense that whenever a failed process is
noticed by the monitor, a flush operation is executed to ensure that all ongoing messages are delivered before the new group view is sent
to each live process. However, the failed process may fail during a multicast, so it is possible that only some members of the group
receive its last message. Depending on whether we want to implement "dynamic uniformity" or not, we can either ask the receivers of this
message to resend it, or not.
virtual synchrony revisited
After writing down the above, I began to suspect what I learned from the professor. In the slides he presented in class, it seems that
virtual synchrony doesn't require a single multicast sender. In fact, update requests can be sent from any process, as long as the
"conflicting update" requests are executed in identical order at all replicas. It can be done!
Suppose a wrapper or a load balancing manager receives all requests from clients. This is indeed the situation in CORBA when you register
a group of replicas under a single registry name. This manager program maintains a group view and can also act as the GMS manager. So, in
the current group we have n replicas. The manager receives a request from a client and sends it to one of the replicas according to its
rank, regardless of whether it is a read or a write request. Here we can apply our load balancing algorithm.
When a replica receives a request, it treats it differently depending on whether it is a read or a write. If it is a read request, it
simply gets the local result and returns it to the sender. (If you want to dig deeper, you may need to apply a read lock to the local
database first.) If it is a write request, the process needs to issue a lock request in "causal order" by multicast with vector time in
the message. Once it receives a lock grant from every member of the group, it sends out its write request to all by simple fbcast,
because under the protection of the lock we already maintain the order of any conflicting updates. Therefore cbcast is no longer needed.
The Connectivity in a Random Graph
This is inspired by a public lecture given by Dr. Hopcroft, who was awarded the Turing Award. The following is just some wild imagination
of my own.
1. Assume that at the beginning you have only N isolated vertices. Then you repeatedly pick a random pair of vertices and add an edge between them. If two vertices are connected, the "shortest distance" between them is simply the length of the shortest path between them.
2. Assuming you pick each vertex with equal chance, how does the number of connected vertices increase?
Let's say that before you pick the first vertex, the number of connected (i.e. non-isolated) vertices is C and the total number of vertices is N, so the number of isolated vertices is N-C. The chance of picking a connected vertex is C/N and the chance of picking an isolated vertex is (N-C)/N. When you pick the second vertex, there is a small chance that you pick exactly the first vertex again, but that adds no connectivity. Therefore, the expected number of connected vertices becomes
C' = (C+2)*(N-C)/N*(N-C-1)/N + (C+1)*2*C/N*(N-C)/N + C*C/N*(C-1)/N + C*1/N
   = ( (N^2 - 2N + 2)*C + 2N^2 - 2N ) / N^2
   = C*(1 - 2/N + 2/N^2) + 2(N-1)/N
a) Please note that the first term is the case where we pick two distinct isolated vertices. The second term is the case where we pick one connected vertex and one isolated vertex; the factor 2 counts the two possible orders of that pick (without it, the four probabilities do not sum to 1). The third term is the case where we pick two distinct connected vertices. The fourth term is the case where the second pick is the same vertex as the first.
b) This is a recursive formula, and N can be viewed as a constant in a stable system. After simplification the recurrence is linear in C: every new edge moves C a fixed fraction of the remaining distance toward N, so C approaches N geometrically.
c) Also please note that the chances of the four cases in a) change with C. That is to say, the more connected the graph is, the bigger the chance that you pick two already-connected vertices, which adds nothing to the count.
3. Another simple question: what is the effect of adding one edge to this graph? Does it create any shortcut in the system?
a) If one of the vertices we pick was originally isolated, then at least we create a new connection, which reduces the distance between these two vertices from infinity to one.
b) If we pick two already-connected vertices, it is hard to say whether the edge creates a shortcut. How can we tell?
Assume the two distinct connected vertices we pick are a and b, and consider the effect of this edge on any other connected vertex x. Let the shortest distances from x to a and b be d1 and d2 respectively; then there are three cases to consider. Obviously the newly-added edge might change d1 and d2.
i) If d1==d2, the shortest distance from x to a or b won't change, because the detour d2+1 > d1.
ii) If d1==d2+1, the shortest distance is still unaffected, because the detour through b has length d2+1 == d1. The case d2==d1+1 is symmetric.
iii) If d1>d2+1, then we have created a new shortcut, namely d2+1.
iv) In other words, the newly-added edge forces the shortest distances from any vertex to the two newly-connected endpoints to differ by at most one. This sounds like a theorem, but I doubt it has any usefulness. (However, it rings a bell about AVL trees. Maybe it can be used in such remote-connection issues.)
4. The more interesting question is: which vertices have their shortest distances changed when we add a new edge? I think countless people are interested in this. For example, in your search-engine database, when a new link is added, what do you need to change in the rest of the data?
the "varying" is not what I expected, but it may be what you understand
<<<<<<<<here goes
I don't quite understand how the sample program calculates the "reflected image", because the code transforms the normal by "gl_NormalMatrix", which is the transpose of the inverse of "gl_ModelViewMatrix", while of course using "gl_ModelViewMatrix" to transform the vertex. (See the example "polkadot3d".) I thought we should use "gl_ModelViewMatrix" for the transformation of both vertex and normal. And when I did that experiment, it looked exactly the same! Why is that? See the attachment of my experiment; I changed the transformation in the "vert" part. Enclosed is the shading program:
//vertex shader
// This is the Vertex Shader for three dimensional polka dots.
//
// author(s): Joshua Doss
//
// Copyright (C) 2002-2006 3Dlabs Inc. Ltd.

//Create uniform variables for lighting to allow user interaction
uniform float SpecularContribution;
uniform vec3 LightPosition;
uniform float MaxCounter;

varying vec3 MCPosition;
varying float LightIntensity;
varying float counter;

void main(void)
{
    float diffusecontribution = 1.0 - SpecularContribution;

    // compute the vertex position in eye coordinates
    vec3 ecPosition = vec3(gl_ModelViewMatrix * gl_Vertex);

    //////////////////////////////////////////////////////////////////////
    //This is where I don't understand: why do we use "gl_NormalMatrix",
    //which is the transpose of the inverse of "gl_ModelViewMatrix",
    //instead of "gl_ModelViewMatrix" itself?
    // compute the transformed normal
    //vec3 tnorm = normalize(gl_NormalMatrix * gl_Normal);
    //////////////////////////////////////////////////////////////////////
    //And the following is my experiment using "gl_ModelViewMatrix".
    //Because the compiler seems not to support the "mat3(mat4)"
    //constructor, I have to split the mat4 into three vec3's and
    //construct the mat3 myself.
    vec3 v0 = vec3(gl_ModelViewMatrix[0]);
    vec3 v1 = vec3(gl_ModelViewMatrix[1]);
    vec3 v2 = vec3(gl_ModelViewMatrix[2]);
    mat3 myMatrix = mat3(v0, v1, v2);
    vec3 tnorm = vec3(myMatrix * gl_Normal);
    tnorm = normalize(tnorm);
    //////////////////////////////////////////////////////////////////////
    //now tnorm is actually the normal after the gl_ModelViewMatrix
    //transformation, and amazingly the effect is all the same.
    //////////////////////////////////////////////////////////////////////

    // compute a vector from the model to the light position
    vec3 lightVec = normalize(LightPosition - ecPosition);

    // compute the reflection vector
    vec3 reflectVec = reflect(-lightVec, tnorm);

    // compute a unit vector in direction of viewing position
    vec3 viewVec = normalize(-ecPosition);

    // calculate amount of diffuse light based on normal and light angle
    float diffuse = max(dot(lightVec, tnorm), 0.0);
    float spec = 0.0;

    // if there is diffuse lighting, calculate specular
    if (diffuse > 0.0)
    {
        spec = max(dot(reflectVec, viewVec), 0.0);
        spec = pow(spec, 16.0);
    }

    // add up the light sources; since this is a varying (global) it will
    // pass to the fragment shader
    LightIntensity = diffusecontribution * diffuse * 1.5 + SpecularContribution * spec;
    //LightIntensity = 0.56;

    counter = counter + 1.0;
    if (counter > (MaxCounter - 1.0))
    {
        counter = 1.0;
    }

    // the varying variable MCPosition will be used by the fragment shader
    // to determine where in model space the current pixel is
    //counter=counter+1;
    MCPosition = vec3(gl_Vertex);

    // send vertex information
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
//fragment shader
//
// Fragment shader for 3 dimensional polka dot shader.
//
// Author: Joshua Doss
//
// Copyright (C) 2002-2006 3Dlabs Inc. Ltd.
//
// See 3Dlabs-License.txt for license information
//
varying float LightIntensity;
varying vec3 MCPosition;
varying float counter;

//Create uniform variables so dots can be spaced and scaled by user
uniform vec3 Spacing;
//uniform float DotSize;
uniform float MaxCounter;

//Create colors as uniform variables so they can be easily changed
uniform vec3 ModelColor, PolkaDotColor;

void main(void)
{
    float insidesphere, sphereradius, scaledpointlength;
    vec3 scaledpoint, finalcolor;

    // Scale the coordinate system
    // The following line of code is not yet implemented in current drivers:
    // mcpos = mod(Spacing, MCposition);
    // We will use a workaround found below for now
    //////////////////////////////////////////////////////////////////////
    //scaledpoint = MCPosition - (Spacing * floor(MCPosition/Spacing));
    // Bring the scaledpoint vector into the center of the scaled coordinate system
    //scaledpoint = scaledpoint - Spacing/2.0;
    //////////////////////////////////////////////////////////////////////

    // Find the length of the scaledpoint vector and compare it to the dotsize
    //scaledpointlength = length(scaledpoint);
    //counter = counter+1;
    //insidesphere = step(scaledpointlength,DotSize);
    if (counter > 0.0)
        insidesphere = 0.0;
    else
        insidesphere = 1.0;
    //insidesphere = counter/MaxCounter;
    //insidesphere = 0.5;
    //LightIntensity = 0.1;
    //myVar = 1.0;

    // Determine final output color before lighting
    finalcolor = vec3(mix(ModelColor, PolkaDotColor, insidesphere));

    // Output final color and factor in lighting
    gl_FragColor = clamp(vec4(finalcolor * LightIntensity, 1.0), 0.0, 1.0);
}
>>>>>>>>Here comes...
The correct transformation for normals is the transpose of the inverse of gl_ModelViewMatrix; check the appendix of the OpenGL red book for the explanation. Maybe there is no scaling in your experiment, so the effects might look the same. I should be back in the lab on Monday. I don't have access to the various versions of Visual Studio at home, so I cannot answer those questions; the code given should run fine with VS7 and VS7.1.
A varying variable is something you calculate in the vertex shader and want interpolated results for in the fragment shader. Consider a triangle: there are 3 vertices, so the vertex shader is called 3 times. The fragment shader is called many times, depending on the number of pixels in the triangle, and each pixel's calculation might need results interpolated from the 3 vertices. The fragment shader operates on a pixel only after the results from all vertices of the triangle (or other primitive) are available. You cannot use a constant instead of a varying because the interpolation is different for different pixels of the triangle; varying automatically takes care of this.
<<<<<<<<<<<<<<<<Here goes
Thank you for your explanation. However, I still don't get how interpolation in the fragment shader is possible with "varying" variables that the vertex program assigns for each vertex. There must be some misunderstanding in my view of the pipeline and the shading language, so the following is my understanding; please verify it for me.
1. All vertex, primitive, and pixel operations are done in a pipelined, stream-like manner, which means they run more or less in parallel:
vertex-transformation pipeline: vert[0], vert[1], vert[2] ... vert[n]
triangle-assembly pipeline: tri[0], tri[1] ... tri[m]
rasterization pipeline: pixel[0], pixel[1], pixel[2] ... pixel[p]
Assume the operation order is optimized so that vert[0], vert[1], vert[2] correspond to tri[0] and the triangle-assembly pipeline doesn't have to wait, and likewise for the rasterization (scan-conversion) pipeline. Then, as you said, the same vertex program is executed for the three vertices vert[0], vert[1], vert[2], and multiple fragments or pixels are scan-converted; the scan conversion cannot start until all vertices of the triangle are available. But then the "varying" variable can only store the assignment done by the last vertex, vert[2] in this case. So how can multiple pixels or fragments interpolate using the same "varying" variable which is shared by three vertices?
2. A second guess occurs to me. Is it true that the interpolation is done "automatically" by the fragment shader itself if we use "interpolating functions" like "step" or "mix"? I.e., if I call step(edge, value), the shader internally interpolates the "value" among all fragments? This is a bit crazy and I just don't believe it.
>>>>>>>>>>>>>>>>here comes..
Maybe the similarity between C++ and the OpenGL Shading Language is causing confusion: many keywords, such as varying, have an expanded or different meaning than in C++. varying (in OpenGL 2, not in C++) means that interpolation between the vertices' values will be done automatically by the hardware; the variable's value will automatically be different for different fragments, as calculated by the hardware. I already explained this in my previous reply. It has nothing to do with functions such as step. This interpolated value is necessary for effects such as per-pixel lighting, where we need interpolated normals and so on. Just as color was automatically interpolated (in smooth shading) for the fragment color in previous versions of OpenGL (say OpenGL 1.2), so are varying variables in the shading language.
<<<<<<<<<<<<<<<<here goes...
Thank you for your explanation. Maybe I should ask a more specific question. The following is a simple shading program.
vertex program:
varying vec4 myposition;
void main(void)
{
    myposition = gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
fragment program:
varying vec4 myposition;
void main()
{
    gl_FragColor = myposition;
}
Suppose we have a model of one big triangle vert1, vert2, vert3: how does the hardware interpolate values between different pixels? Or more specifically, after the vertex program runs three times, what happens to the "varying" variable myposition? And for all n pixels, what value would each pixel pick up?
<<<<<<<<<here goes...
Sorry again. Just after sending out the previous question, I think I might understand what you mean. When doing scan conversion, for all the "varying" variables shared between the three vertices of one triangle, the hardware does linear interpolation. So, taking the example from my previous mail, suppose the triangle is finally transformed to the display coordinates v1(0,0), v2(10,10), v3(20,5). Then between v1 and v2 there are 10 rows, and the varying "myposition" will be linearly interpolated over 10 steps. Similarly, between v1 and v3 there are 5 rows, so "myposition" will be linearly interpolated over 5 steps. And within each scan line, "myposition" will also be linearly interpolated "horizontally": on the segment passing v2 there are 10 columns, and the scan line linearly interpolates "myposition" over 10 steps.
>>>>>>>>>>>here comes...
Yes, interpolation between vertices occurs, though the formula is more complicated. See the OpenGL specification for the exact interpolation formula.
Sometimes my brain is as sticky as glue...
The following pseudocode describes the LZW compression algorithm.
set w = NIL
loop
    read a character K
    if wK exists in the dictionary
        w = wK
    else
        output the code for w
        add wK to the string table
        w = K
endloop
I like to interpret it as an algorithm of "discovery", which matches one's intuitive procedure for discovering the surrounding environment. Whenever you see something new, you record it in your memory in preparation for the next "recognition". And whenever you see something, the first thing you try is to find out whether you have seen it before. So, if this is the way knowledge is established, then what is the advantage of this architecture of information? In other words, what is the "good" of doing this? One obvious answer is that it makes optimum use of memory so that we can memorize more.
Finally I give up because I don't understand the significance of my way.
#include <set>
#include <cstdio>
#include <cstring> //for strcmp, strncpy, strlen
#include <deque>
using namespace std;
struct Code
{
char* code;
int index;
static int counter;
void display();
};
int Code::counter=0;
struct CodeComp
{
bool operator()(const Code& first, const Code& second) const
{
return strcmp(first.code, second.code)<0;
}
};
typedef set<Code, CodeComp> CodeSet;
typedef deque<Code> CodeQueue;
class Dictionary
{
private:
CodeSet codeSet;
Code dummy;
CodeQueue codeQueue;
CodeSet::iterator it;
public:
void readFile(char* fileName);
void readString(char* sentence);
void printCode();
void printQueue();
};
int main()
{
Dictionary dictionary;
dictionary.readString("\\WED\\WE\\WEE\\WEB");
dictionary.printCode();
dictionary.printQueue();
return 0;
}
void Dictionary::printQueue()
{
printf("the code queue is:\n");
for (int i=0; i<codeQueue.size(); i++)
{
codeQueue[i].display();
}
}
void Code::display()
{
printf("[index=%d;code=%s]\n", index, code);
}
void Dictionary::printCode()
{
printf("the code table is:\n");
for (it=codeSet.begin(); it!=codeSet.end(); it++)
{
it->display();
}
}
void Dictionary::readString(char* sentence)
{
int len=strlen(sentence);
int tempLen, i, j;
Code result;
char* buf=new char[len+1];
codeQueue.clear();
for (i=0; i<len; i++)
{
buf[i]=sentence[i];
buf[i+1]='\0';
//index=-1;
//printf("now let's see code\n");
//printCode();
//printf("now let's see queue\n");
//printQueue();
result.code=NULL;
for (j=i; j>=0; j--)
{
dummy.code=buf+j;
it=codeSet.find(dummy);
if (it==codeSet.end())
{
tempLen=strlen(buf+j);
dummy.code=new char[tempLen+1];
strncpy(dummy.code, buf+j, tempLen+1);
dummy.index=dummy.counter++;
it=codeSet.insert(dummy).first;
break;
}
else
{
result=*it;
}
}
if (result.code==NULL)
{
result=*it;
}
tempLen=strlen(result.code);
while (!codeQueue.empty())
{
if (tempLen>strlen(codeQueue.back().code))
{
tempLen-=strlen(codeQueue.back().code);
codeQueue.pop_back();
}
else
{
break;
}
}
codeQueue.push_back(result);
}
delete[] buf;
}
void Dictionary::readFile(char* fileName)
{
}
//The running result is like this:
the code table is:
[index=10;code=B]
[index=2;code=D]
[index=3;code=DW]
[index=1;code=E]
[index=7;code=EE]
[index=8;code=EEW]
[index=9;code=EEWE]
[index=5;code=EW]
[index=6;code=EWE]
[index=0;code=W]
[index=4;code=WE]
the code queue is:
[index=0;code=W]
[index=1;code=E]
[index=2;code=D]
[index=0;code=W]
[index=1;code=E]
[index=4;code=WE]
[index=6;code=EWE]
[index=10;code=B]
Press any key to continue
the simple binary sending...
The title is not exact because I cannot find a proper term to describe it. In a cluster, point-to-point communication proceeds in parallel, so when you want all data sent to one node for processing, it is better to do it in a binary-tree-like fashion. The following code is a demo and a test for my personal use. It is a bit of an improvement over my naive implementation last winter.
#include <stdio.h>
#include <stdlib.h>
struct BinaryDesc
{
int total;
int *array;
int myIndex;
};
int roundCount;
void recvFrom(int me, int from)
{
printf("me%d will receive from %d\n", me, from);
}
void sendTo(int me, int to)
{
printf("me%d will send to %d\n", me, to);
}
void doBinary(const BinaryDesc* desc)
{
int i;
BinaryDesc myDesc;
if (desc->total==1)
{
printf("finished and last one is %d\n", desc->array[0]);
return;
}
else
{
printf("after %d round\n", ++roundCount);
}
if (desc->myIndex%2==0)
{
//i may have to receive
if (desc->myIndex+1<desc->total)
{
recvFrom(desc->array[desc->myIndex], desc->array[desc->myIndex+1]);
}
else
{
printf("me%d is idle in this round\n", desc->array[desc->myIndex]);
}
//in any case I will do this
myDesc.total=(desc->total+1)/2;
myDesc.array=new int[myDesc.total];
for (i=0; i<myDesc.total; i++)
{
myDesc.array[i]=desc->array[2*i];
}
myDesc.myIndex=desc->myIndex/2;
doBinary(&myDesc);
delete[]myDesc.array;
}
else
{
//i always have to send
sendTo(desc->array[desc->myIndex], desc->array[desc->myIndex-1]);
//then I quit;
}
}
int main()
{
BinaryDesc desc;
int i, last=0;
desc.total=12;
desc.array=new int[desc.total];
printf("the array is:\n");
for (i=0; i<desc.total; i++)
{
desc.array[i]=last+rand()%3+1;
last=desc.array[i];
printf("[%d]", desc.array[i]);
}
printf("\nand now let's do one by one\n");
for (i=0; i<desc.total; i++)
{
printf("\n******************\nfor index =%d\n", i);
roundCount=0;
desc.myIndex=i;
doBinary(&desc);
}
delete[]desc.array;
return 0;
}
The running outcome is like this:
the array is:
[3][6][8][10][13][15][16][17][19][22][25][28]
and now let's do one by one

******************
for index =0
after 1 round
me3 will receive from 6
after 2 round
me3 will receive from 8
after 3 round
me3 will receive from 13
after 4 round
me3 will receive from 19
finished and last one is 3

******************
for index =1
after 1 round
me6 will send to 3

******************
for index =2
after 1 round
me8 will receive from 10
after 2 round
me8 will send to 3

******************
for index =3
after 1 round
me10 will send to 8

******************
for index =4
after 1 round
me13 will receive from 15
after 2 round
me13 will receive from 16
after 3 round
me13 will send to 3

******************
for index =5
after 1 round
me15 will send to 13

******************
for index =6
after 1 round
me16 will receive from 17
after 2 round
me16 will send to 13

******************
for index =7
after 1 round
me17 will send to 16

******************
for index =8
after 1 round
me19 will receive from 22
after 2 round
me19 will receive from 25
after 3 round
me19 is idle in this round
after 4 round
me19 will send to 3

******************
for index =9
after 1 round
me22 will send to 19

******************
for index =10
after 1 round
me25 will receive from 28
after 2 round
me25 will send to 19

******************
for index =11
after 1 round
me28 will send to 25
Press any key to continue

Do you know or do you not know?
"Nick, let me ask you about something!"
"Yes?"
"What is computation?"
"Hmm.... computation is such a procedure that we compute some elements and assemble their results in a certain manner to present."
"What IS computation?"
"Hmm..."
"What EXACTLY are you doing for computation?"
"Hmm... I am trying to sort..."
"Yes!"
"...out...the problem...Am I right?"
"Yes!"
"Is my answer correct about computation?"
"Yes! Oh, no!"
"Yes or no?"
"You are sorting."
"What am I doing?"
"Sorting!"
"Hmm...do you mean all computation is just a kind of sorting?"
"Yes!"
"Hmm...any problem? any algorithm?"
"Yes!"
"Isn't it too generic?"
"Isn't it good?"
"Hmm...it depends, doesn't it?"
"Depends on what?"
"It depends on how you define sorting."
"Sorting is such a procedure that you compare all elements according to a specific order and then reassemble the data to present it in that order. Optimization questions usually ask you for the top element of the result, like the maximum or minimum one, or for some property of it, like its size. A decision problem simply asks whether the element has such a property."
"Hmm..."
"..."
"How do you sort?"
"You compare elements."
"In pairs?"
"That is the best you can expect. Most P problems require you to look only into binary relations. In some cases you may need to look into triples or more. As long as you don't have to look into n-ary relations, it is a P problem."
"Is that all?"
"I think so."
"We have n*(n-1)/2 pairs for n elements. Do you suggest that for all P problems we can only have O(n*n) algorithms?"
"Certainly not! We are talking about a "total order" here, which enjoys the transitive law (a<b & b<c) => a<c. All sorting algorithms take advantage of this property to cut the comparisons per element from n down to about logn, and that is why the best comparison-based sorting algorithms run in O(nlogn)."
"Saving logn from n? Can you be more specific?"
"No!"
"Why not?"
"Hmm..."
"Do you know or do you not know?"
"Hmm..."
"How about those algorithms that only need O(n)?"
"What are these algorithms doing?"
"Hmm..."
"They are searching!"
"Hmm..."
"For a searching problem you only need to compare what you have with what you encounter, in a linear probe."
"Then it is SEARCHING not SORTING, isn't it?"
"It is still sorting, only your comparison requirement becomes equal or not-equal. Therefore all n elements are sorted into two super-elements: one is equal, the other is not equal."
"Hmm...then how about those algorithms with O(logn)? Are we doing sorting with binary search?"
"This is because you are taking advantage of numbers!"
"Numbers?"
"Yes! What is number?"
"Hmm..."
"The number system can be MAPPED onto by any other "total order", like an image, and we have already pre-calculated some properties of the number system. By taking advantage of this, we achieve O(logn)."
"Can you be more specific?"
"What do you know about numbers?"
"Hmm..."
"Do you know the precedence of each number? Then after mapping each element to a number, which is a special total order, you take advantage of knowing each number's precedence and can even random-access elements like RAM."
"Hmm...can you be more specific?"
"..."
"I am asking a question!"
"Zzzz..."
question:
Given 2n+1 numbers, all except one occurring in pairs of the same value, find the single one. E.g. for [1,2,4,6,2,1,4] (n=3 as input), 6 is the number you need to find.
my answer:
#include <set>
#include <cstdio>
using namespace std;
void findSingle(int array[], int arraySize)
{
set<int> mySet;
pair<set<int>::iterator, bool> result;
for (int i=0; i<arraySize; i++)
{
result=mySet.insert(array[i]);
if (!result.second)
{
mySet.erase(result.first);
}
}
printf("it is %d\n", *mySet.begin());
}
time complexity:
O(log1 + log2 + log3 + ... + logn) = O(log n!) = O(nlogn)
I didn't know the closed form for this series at first: by Stirling's approximation, log n! grows like nlogn.
/////////////////////////////////
The above is a common solution, but there is a magic solution: just "xor" all of them and what remains is the number. I learned this from my friend.
////////////////////////////////
question: loop in a linked list.
How do you find a loop in a linked list? I learned this magic algorithm from my friends.
#include <stdio.h>
#include <stdlib.h>
struct Link
{
Link* next;
int rank;
int value;
};
Link* init(int length)
{
Link* result=new Link;
Link* ptr=result;
result->next=NULL;
result->rank=0;
result->value=rand()%1000;
for (int i=0; i<length; i++)
{
ptr->next=new Link;
ptr->next->rank=ptr->rank+1;
ptr->next->next=NULL;
ptr->next->value=rand()%1000;
ptr=ptr->next;
}
return result;
}
void make_loop(Link* root, int crossStep)
{
Link* cross=root;
Link* ptr;
if (root==NULL)
{
printf("empty link list\n");
return;
}
for (int i=0; i<crossStep; i++)
{
cross=cross->next;
if (cross==NULL)
{
printf("link list is not long enough\n");
return;
}
}
ptr=cross;
while (ptr->next!=NULL)
{
ptr=ptr->next;
}
ptr->next=cross;
}
void print_link(Link* root)
{
Link* ptr=root;
printf("begin print link list\n");
while (ptr!=NULL)
{
printf("rank[%d] value[%d]\n", ptr->rank, ptr->value);
ptr=ptr->next;
}
}
bool find_loop(Link* root)
{
Link* fast=root, *slow=root;
int counter=0;
//guard the root==NULL case before dereferencing fast->next
if (root==NULL)
{
return false;
}
while (slow!=NULL)
{
if (fast->next!=NULL)
{
fast=fast->next;
}
else
{
return false;//no loop
}
if (fast->next!=NULL)
{
fast=fast->next;
}
else
{
return false;//no loop
}
if (slow==fast)
{
printf("first counter=%d\n", counter);
return true;
}
if (slow->next!=NULL)
{
slow=slow->next;
if (slow==fast)
{
printf("second counter=%d\n", counter);
return true;
}
}
counter++;
}
return false;
}
int main()
{
Link* root;
root=init(20);
print_link(root);
make_loop(root, 15);
//print_link(root);
if (find_loop(root))
{
printf("there is a loop\n");
}
else
{
printf("there is no loop\n");
}
return 0;
}
question: how to test if a number is a power of 2?
I learned this from my friend: !(number & (number-1)). (Note that this expression also reports 0 as a power of 2, so guard for zero if that matters.)
question: how to count how many bits are set in an integer?
int countBits(unsigned int number)
{
int result=0;
while (number)
{
number=(number-1)&number;//clears the lowest set bit
result++;
}
return result;
}
question: how to swap two integers without an extra variable for temporary storage?
void swapInteger(int& x, int& y)
{
x=x^y; //x = x xor y
y=x^y; //y = (x xor y) xor y ==> x
x=x^y; //x = (x xor y) xor x ==> y
}
W gives an even simpler solution!
x=x+y;
y=x-y;
x=x-y;
(Note that both tricks fail if x and y refer to the same variable, and the arithmetic version can overflow a signed int.)
Remember: the determinant of a reflection is -1 and the determinant of a rotation is 1.
Here is a little summary of the various "type" casts. Please note that for complex (class) types this should rather not be called a type cast, but a type-compatible assignment. You see, it is the C++ specification that a derived-class type is "compatible" with its base-class type; there is actually no "type conversion", and therefore no need for a type cast.
                                           | static_cast | dynamic_cast | reinterpret_cast | const_cast | C cast
Primitive type                             | Y           | N            | N                | N          | Y
Primitive pointer type                     | N           | N            | Y                | Y/N (*)    | Y
Complex type (no inheritance)              | N           | N            | N                | N          | N
Complex pointer type (no inheritance)      | N           | N            | Y                | N          | Y
Runtime check (complex pointer type)       | N           | Y            | N                | N          | N
Remove const and volatile for pointer type | N           | N            | N                | Y          | Y

(*) const_cast cannot cast to any other type than itself without const.
What is new?
There are several things I want to make a note of.
1. The so-called singleton is made by "hiding" its constructor as "protected" so that no outside code can
call the constructor, because of access restriction. I.e., for A a; the instance "a" is trying to access
A::A(), which is declared protected.
2. Since we hide the constructor, we then need to provide a restricted way to construct an instance,
by declaring a method like "createInstance" that returns a new instance. In the case of a singleton, we never
generate more than one instance.
3. In principle, a derived class inherits all protected members and methods from its base class under public
inheritance. If the base class declares its constructor protected, it is inherited just like anything
else. However, the syntax a = new A; automatically calls the constructor, roughly like
i) A::operator new(sizeof(A)); //if A's operator new is overridden, otherwise the global operator new is used
ii) A::A()
and this is treated as if you wrote a.A(). Therefore you are NOT able to call the constructor directly EVEN in
a member function of the derived class, although the derived class inherits all protected members from its
base class. This is the reason why you cannot call the base-class constructor in a derived-class member function.
4. Overriding operator new in a class doesn't change the situation in a = new A; because as soon as you type "A"
after "new" you instruct the compiler to call A's constructor after A's operator new. So nothing changes unless
you specifically write a = (A*)A::operator new(sizeof(A)); However, this is a rather pointless question,
because neither you nor I can imagine a purpose for doing this. Just for hacking code?
5. There is one thing I didn't understand at the beginning. When I overrode operator new as below, I always got a warning
from the compiler saying that no matching "delete" operator was found and that there was a possible memory leak, even though
I had already overridden my delete operator as A::operator delete(void* ptr);
Later I found out why: the compiler tries to find a delete operator whose extra parameters exactly match those of
the overridden operator new. I.e., void* A::operator new(size_t size, sometype param); must be matched with
void A::operator delete(void* ptr, sometype param);
By default, new takes size_t as its first parameter and delete takes void* as its first parameter; from then on, the
parameter types and number must match. This is the so-called matched new-delete.
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
class A
{
protected:
static A* a;
static bool instance();
void func(){printf("I am func of A\n");}
int aInt;
void* operator new(size_t size, char initChar);
void operator delete(void* ptr);
A(){;}
public:
};
void* A::operator new(size_t size, char initChar)
{
if (a==NULL)//only allocate when there is no instance yet
{
a=(A*)malloc(size);
memset(a, initChar, size);//memset takes (ptr, value, size)
}
return a;
}
void A::operator delete(void* ptr)
{
free(ptr);
ptr=NULL;
}
A* A::a=NULL;
bool A::instance()
{
if (a==NULL)
{
a=(A*)(new(0) A);
}
return true;
}
class B: public A
{
public:
void instance()
{
//a=&(A::A()); compiled in VC as the address of a local temporary, but
//calling A::A() directly is a compiler extension, not standard C++,
//so it is left out here.
//this works because we work around A's constructor and
//only use its "member method" operator new.
a=(A*)A::operator new(sizeof(A), 0);
A::func();
printf("now I am inside B and access A's member, like its constructor.\n");
//A* ptr=new A;//this will give an error
aInt=0;
}
void deInstance()
{
delete a;
}
};
int main()
{
B b;
//A* a=(A*)A::operator new(sizeof(A),0) ;
b.instance();
b.deInstance();
return 0;
}