SubSpace Forum Network

Xog

»VIP
  • Posts: 974
  • Joined
  • Last visited
Everything posted by Xog

  1. Oh Karma, how I loathe thee. This Koala sure is Karmic! *nudge nudge* http://img130.imageshack.us/img130/3622/screenshotqt.png AFK digging grave for my hard drive.
  2. Okay, I've asked Gen2ly to make some changes so that I can run it under the new wine version (1.1.33, aka 1.2), and he told me to make a post on the minegoboom forum. No luck there; nobody's responding. Anyone want to lend some help with this? I'll be sure to come back here if I ever find a fix -_- -Xog
  3. I see your picture of a neglected computer, possibly owned by someone who had no idea that buildup could collect inside the case, and who barely knew how to use a computer... and raise you a painful reminder of how ignorant, irresponsible, and unclean some people can REALLY be. Please keep in mind this picture is completely real: this is somebody's keyboard, and they let it get like this. http://widescreenwallpapers.org/klodian/Viral-Emails/Desktop-Computer-Viral-Email-Campaign-1.jpg
  4. HOLY SHIT LOOOOOOOOL
  5. Last night I decided to disassemble my laptop for the sake of argument. I've never done this before, but I followed the guide specifically for my model, on a webpage open on my iPhone. This computer is from 2005-6, and I use it just about every day. I smoke probably 10 cigarettes a day in front of it, while it's on my lap. When I looked inside, the only thing that had the TINIEST bit of a problem was the fan. It had a LITTLE dust on it, and I've NEVER had this thing serviced or cleaned out before. I was actually quite surprised. I decided to clean the little bit of dust out with my can of air (such silly things we produce) and expected dust to blow out from underneath the fan... nope, it was just on the fan blades. Needless to say, I've easily smoked over 1,000 cigarettes directly in front of this laptop, and the smoke has had no visible effect on it. Just normal dust. Also note that this laptop has a GeForce Go 7300 and a Pentium dual-core T2080 (1MB cache/1.73GHz/533MHz FSB). Reassembling my laptop was a challenge in itself, though; I literally had to take everything apart in order to get to the fan. http://support.dell.com/support/edocs/systems/ins6400/en/sm/fans.htm#wp1000550
  6. ^this. Another factor: dust. Are they going to start voiding warranties for users who don't vacuum and clean up their house? Seriously. LOL!
  7. http://forums.minegoboom.com/viewtopic.php?t=8603
  8. "Did you re-download and replace the file after you upgraded? (Upgrading deletes the patched file.) If you did and it's still not working, I probably need to recompile the DLL..." Actually, when I installed the new wine, I completely removed the two programs I used in wine and uninstalled wine itself, then installed 1.2 and followed the instructions for installing/running Continuum. I installed it, put in the terminal entries, and when I run it with wine it just says "Opening Continuum.exe" on my bottom panel for about 10-15 seconds and then closes. I checked my System Log Viewer, went to Messages, and got this on the bottom line:
     Nov 22 18:57:28 logan-laptop kernel: [ 5524.802361] wine[2825]: segfault at 0 ip (null) sp bfa4f01c error 4
  9. http://wine.getcontinuum.com/
     0. If you do not already have wine installed on your system, install wine in the standard way on your system.
     1. Download standard Continuum installer - 4.7 MB
     2. Run the installer downloaded above (default options are fine) on wine.
     3. Before you run Continuum, execute the following commands:
          wget http://subspace2.net/kernel32.dll.so -O /tmp/kernel32.dll.so
          sudo mv /usr/lib/wine/kernel32.dll.so /usr/lib/wine/kernel32.dll.so.old
          sudo cp /tmp/kernel32.dll.so /usr/lib/wine/kernel32.dll.so
     Alternate 3. Before you run Continuum, back up the original kernel library and download the new patched version with the following commands:
          sudo mv /usr/lib/wine/kernel32.dll.so{,.bak}
          sudo wget http://subspace2.net/kernel32.dll.so -O /usr/lib/wine/kernel32.dll.so
     4. Run Continuum.
     5. PLAY!
     I just updated to the new wine version (1.1.33, also called 1.2) and it won't run. Can someone give a fix? I'm going to ask in the ubuntuforums.org thread for Gen2ly to provide a fix; if he responds there before this place, I'll post the fix here.
  10. http://wine.getcontinuum.com/
     0. If you do not already have wine installed on your system, install wine in the standard way on your system.
     1. Download standard Continuum installer - 4.7 MB
     2. Run the installer downloaded above (default options are fine) on wine.
     3. Before you run Continuum, execute the following commands:
          wget http://subspace2.net/kernel32.dll.so -O /tmp/kernel32.dll.so
          sudo mv /usr/lib/wine/kernel32.dll.so /usr/lib/wine/kernel32.dll.so.old
          sudo cp /tmp/kernel32.dll.so /usr/lib/wine/kernel32.dll.so
     Alternate 3. Before you run Continuum, back up the original kernel library and download the new patched version with the following commands:
          sudo mv /usr/lib/wine/kernel32.dll.so{,.bak}
          sudo wget http://subspace2.net/kernel32.dll.so -O /usr/lib/wine/kernel32.dll.so
     4. Run Continuum.
     5. PLAY!
  11. The fix for wine 1.2!
  12. Xog

    Haha! I totally know where you're coming from.
  13. A single neuron (i.e. processing unit) takes its total input In and produces an output activation Out. I shall take this to be the sigmoid function
       Out = 1.0/(1.0 + exp(-In)) ;     /* Out = Sigmoid(In) */
though other activation functions are often used (e.g. linear or hyperbolic tangent). This has the effect of squashing the infinite range of In into the range 0 to 1. It also has the convenient property that its derivative takes the particularly simple form
       Sigmoid_Derivative = Sigmoid * (1.0 - Sigmoid) ;
Typically, the input In into a given neuron will be the weighted sum of output activations feeding in from a number of other neurons. It is convenient to think of the activations flowing through layers of neurons. So, if there are NumUnits1 neurons in layer 1, the total activation flowing into our layer 2 neuron is just the sum over Layer1Out[i]*Weight[i], where Weight[i] is the strength/weight of the connection between unit i in layer 1 and our unit in layer 2. Each neuron will also have a bias, or resting state, that is added to the sum of inputs, and it is convenient to call this Weight[0]. We can then write
       Layer2In = Weight[0] ;                    /* start with the bias */
       for( i = 1 ; i <= NumUnits1 ; i++ ) {     /* i loop over layer 1 units */
         Layer2In += Layer1Out[i] * Weight[i] ;  /* add in weighted contributions from layer 1 */
       }
       Layer2Out = 1.0/(1.0 + exp(-Layer2In)) ;  /* compute sigmoid to give activation */
Normally layer 2 will have many units as well, so it is appropriate to write the weights between unit i in layer 1 and unit j in layer 2 as an array Weight[i][j]. Thus to get the output of unit j in layer 2 we have
       Layer2In[j] = Weight[0][j] ;
       for( i = 1 ; i <= NumUnits1 ; i++ ) {
         Layer2In[j] += Layer1Out[i] * Weight[i][j] ;
       }
       Layer2Out[j] = 1.0/(1.0 + exp(-Layer2In[j])) ;
Remember that in C the array indices start from zero, not one, so we would declare our variables as
       double Layer1Out[NumUnits1+1] ;
       double Layer2In[NumUnits2+1] ;
       double Layer2Out[NumUnits2+1] ;
       double Weight[NumUnits1+1][NumUnits2+1] ;
(or, more likely, declare pointers and use calloc or malloc to allocate the memory). Naturally, we need another loop to get all the layer 2 outputs
       for( j = 1 ; j <= NumUnits2 ; j++ ) {
         Layer2In[j] = Weight[0][j] ;
         for( i = 1 ; i <= NumUnits1 ; i++ ) {
           Layer2In[j] += Layer1Out[i] * Weight[i][j] ;
         }
         Layer2Out[j] = 1.0/(1.0 + exp(-Layer2In[j])) ;
       }
Three layer networks are necessary and sufficient for most purposes, so our layer 2 outputs feed into a third layer in the same way as above
       for( j = 1 ; j <= NumUnits2 ; j++ ) {     /* j loop computes layer 2 activations */
         Layer2In[j] = Weight12[0][j] ;
         for( i = 1 ; i <= NumUnits1 ; i++ ) {
           Layer2In[j] += Layer1Out[i] * Weight12[i][j] ;
         }
         Layer2Out[j] = 1.0/(1.0 + exp(-Layer2In[j])) ;
       }
       for( k = 1 ; k <= NumUnits3 ; k++ ) {     /* k loop computes layer 3 activations */
         Layer3In[k] = Weight23[0][k] ;
         for( j = 1 ; j <= NumUnits2 ; j++ ) {
           Layer3In[k] += Layer2Out[j] * Weight23[j][k] ;
         }
         Layer3Out[k] = 1.0/(1.0 + exp(-Layer3In[k])) ;
       }
The code can start to become confusing at this point - I find that keeping a separate index i, j, k for each layer helps, as does an intuitive notation for distinguishing between the different layers of weights Weight12 and Weight23. For obvious reasons, for three layer networks, it is traditional to call layer 1 the Input layer, layer 2 the Hidden layer, and layer 3 the Output layer.
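As a quick side check (this little program is not part of the quoted guide, and the sample inputs are arbitrary), here is a minimal standalone C snippet that evaluates the sigmoid and confirms numerically, by finite differences, that its derivative really is Sigmoid * (1.0 - Sigmoid):
       #include <stdio.h>
       #include <math.h>

       double Sigmoid( double In ) {              /* Out = Sigmoid(In), as above */
         return 1.0/(1.0 + exp(-In)) ;
       }

       int main( void ) {
         double samples[5] = { -4.0, -1.0, 0.0, 1.0, 4.0 } ;   /* arbitrary test inputs */
         double h = 1.0e-6 ;                                   /* step for finite difference */
         int n ;
         for( n = 0 ; n < 5 ; n++ ) {
           double In  = samples[n] ;
           double Out = Sigmoid(In) ;                          /* squashed into (0,1) */
           double Analytic = Out * (1.0 - Out) ;               /* Sigmoid * (1 - Sigmoid) */
           double Numeric  = ( Sigmoid(In + h) - Sigmoid(In - h) ) / (2.0 * h) ;
           printf("In = %5.1f   Out = %.6f   analytic deriv = %.6f   numeric deriv = %.6f\n",
                  In, Out, Analytic, Numeric) ;
         }
         return 0 ;
       }
It compiles the same way as the guide's nn.c (e.g. 'cc sigmoid_check.c -O -lm -o sigmoid_check', where sigmoid_check.c is just a file name I picked), and the analytic and numeric columns should agree to printed precision.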
Our network thus takes on the familiar form that we shall use for the rest of this document: http://www.cs.bham.ac.uk/~jxb/NN/nn.gif Also, to save getting all the In's and Out's confused, we can write LayerNIn as SumN. Our code can thus be written
       for( j = 1 ; j <= NumHidden ; j++ ) {     /* j loop computes hidden unit activations */
         SumH[j] = WeightIH[0][j] ;
         for( i = 1 ; i <= NumInput ; i++ ) {
           SumH[j] += Input[i] * WeightIH[i][j] ;
         }
         Hidden[j] = 1.0/(1.0 + exp(-SumH[j])) ;
       }
       for( k = 1 ; k <= NumOutput ; k++ ) {     /* k loop computes output unit activations */
         SumO[k] = WeightHO[0][k] ;
         for( j = 1 ; j <= NumHidden ; j++ ) {
           SumO[k] += Hidden[j] * WeightHO[j][k] ;
         }
         Output[k] = 1.0/(1.0 + exp(-SumO[k])) ;
       }
Generally we will have a whole set of NumPattern training patterns, i.e. pairs of input and target output vectors, Input[p][i], Target[p][k], labelled by the index p. The network learns by minimizing some measure of the error of the network's actual outputs compared with the target outputs. For example, the sum squared error over all output units k and all training patterns p will be given by
       Error = 0.0 ;
       for( p = 1 ; p <= NumPattern ; p++ ) {
         for( k = 1 ; k <= NumOutput ; k++ ) {
           Error += 0.5 * (Target[p][k] - Output[p][k]) * (Target[p][k] - Output[p][k]) ;
         }
       }
(The factor of 0.5 is conventionally included to simplify the algebra in deriving the learning algorithm.) If we insert the above code for computing the network outputs into the p loop of this, we end up with
       Error = 0.0 ;
       for( p = 1 ; p <= NumPattern ; p++ ) {      /* p loop over training patterns */
         for( j = 1 ; j <= NumHidden ; j++ ) {     /* j loop over hidden units */
           SumH[p][j] = WeightIH[0][j] ;
           for( i = 1 ; i <= NumInput ; i++ ) {
             SumH[p][j] += Input[p][i] * WeightIH[i][j] ;
           }
           Hidden[p][j] = 1.0/(1.0 + exp(-SumH[p][j])) ;
         }
         for( k = 1 ; k <= NumOutput ; k++ ) {     /* k loop over output units */
           SumO[p][k] = WeightHO[0][k] ;
           for( j = 1 ; j <= NumHidden ; j++ ) {
             SumO[p][k] += Hidden[p][j] * WeightHO[j][k] ;
           }
           Output[p][k] = 1.0/(1.0 + exp(-SumO[p][k])) ;
           Error += 0.5 * (Target[p][k] - Output[p][k]) * (Target[p][k] - Output[p][k]) ;   /* Sum Squared Error */
         }
       }
I'll leave the reader to dispense with any indices that they don't need for the purposes of their own system (e.g. the indices on SumH and SumO). The next stage is to iteratively adjust the weights to minimize the network's error. A standard way to do this is by 'gradient descent' on the error function. We can compute how much the error is changed by a small change in each weight (i.e. compute the partial derivatives dError/dWeight) and shift the weights by a small amount in the direction that reduces the error. The literature is full of variations on this general approach - I shall begin with the 'standard on-line back-propagation with momentum' algorithm.
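Restated as an equation (my own notation, using the same variable names as the code above), the quantity the Error loop accumulates is
       E = \frac{1}{2} \sum_{p=1}^{NumPattern} \sum_{k=1}^{NumOutput} \big( Target_{p,k} - Output_{p,k} \big)^2 ,
       \qquad Output_{p,k} = \sigma( SumO_{p,k} ), \quad \sigma(x) = \frac{1}{1 + e^{-x}} .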
This is not the place to go through all the mathematics, but for the above sum squared error we can compute and apply one iteration (or 'epoch') of the required weight changes DeltaWeightIH and DeltaWeightHO using
       Error = 0.0 ;
       for( p = 1 ; p <= NumPattern ; p++ ) {      /* repeat for all the training patterns */
         for( j = 1 ; j <= NumHidden ; j++ ) {     /* compute hidden unit activations */
           SumH[p][j] = WeightIH[0][j] ;
           for( i = 1 ; i <= NumInput ; i++ ) {
             SumH[p][j] += Input[p][i] * WeightIH[i][j] ;
           }
           Hidden[p][j] = 1.0/(1.0 + exp(-SumH[p][j])) ;
         }
         for( k = 1 ; k <= NumOutput ; k++ ) {     /* compute output unit activations and errors */
           SumO[p][k] = WeightHO[0][k] ;
           for( j = 1 ; j <= NumHidden ; j++ ) {
             SumO[p][k] += Hidden[p][j] * WeightHO[j][k] ;
           }
           Output[p][k] = 1.0/(1.0 + exp(-SumO[p][k])) ;
           Error += 0.5 * (Target[p][k] - Output[p][k]) * (Target[p][k] - Output[p][k]) ;
           DeltaO[k] = (Target[p][k] - Output[p][k]) * Output[p][k] * (1.0 - Output[p][k]) ;
         }
         for( j = 1 ; j <= NumHidden ; j++ ) {     /* 'back-propagate' errors to hidden layer */
           SumDOW[j] = 0.0 ;
           for( k = 1 ; k <= NumOutput ; k++ ) {
             SumDOW[j] += WeightHO[j][k] * DeltaO[k] ;
           }
           DeltaH[j] = SumDOW[j] * Hidden[p][j] * (1.0 - Hidden[p][j]) ;
         }
         for( j = 1 ; j <= NumHidden ; j++ ) {     /* update weights WeightIH */
           DeltaWeightIH[0][j] = eta * DeltaH[j] + alpha * DeltaWeightIH[0][j] ;
           WeightIH[0][j] += DeltaWeightIH[0][j] ;
           for( i = 1 ; i <= NumInput ; i++ ) {
             DeltaWeightIH[i][j] = eta * Input[p][i] * DeltaH[j] + alpha * DeltaWeightIH[i][j] ;
             WeightIH[i][j] += DeltaWeightIH[i][j] ;
           }
         }
         for( k = 1 ; k <= NumOutput ; k++ ) {     /* update weights WeightHO */
           DeltaWeightHO[0][k] = eta * DeltaO[k] + alpha * DeltaWeightHO[0][k] ;
           WeightHO[0][k] += DeltaWeightHO[0][k] ;
           for( j = 1 ; j <= NumHidden ; j++ ) {
             DeltaWeightHO[j][k] = eta * Hidden[p][j] * DeltaO[k] + alpha * DeltaWeightHO[j][k] ;
             WeightHO[j][k] += DeltaWeightHO[j][k] ;
           }
         }
       }
(There is clearly plenty of scope for re-ordering, combining and simplifying the loops here - I will leave that for the reader to do once they have understood what the separate code sections are doing.) The weight changes DeltaWeightIH and DeltaWeightHO are each made up of two components. First, the eta component that is the gradient descent contribution. Second, the alpha component that is a 'momentum' term which effectively keeps a moving average of the gradient descent weight change contributions, and thus smoothes out the overall weight changes. Fixing good values of the learning parameters eta and alpha is usually a matter of trial and error. Certainly alpha must be in the range 0 to 1, and a non-zero value does usually speed up learning. Finding a good value for eta will depend on the problem, and also on the value chosen for alpha. If it is set too low, the training will be unnecessarily slow. Having it too large will cause the weight changes to oscillate wildly, and can slow down or even prevent learning altogether. (I generally start by trying eta = 0.1 and explore the effects of repeatedly doubling or halving it.) The complete training process will consist of repeating the above weight updates for a number of epochs (using another for loop) until some error criterion is met, for example the Error falls below some chosen small number. (Note that, with sigmoids on the outputs, the Error can only reach exactly zero if the weights reach infinity! Note also that sometimes the training can get stuck in a 'local minimum' of the error function and never get anywhere near the actual minimum.)
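For anyone who wants the step the guide skips, here is a brief sketch (my own algebra, written in the code's notation) of where DeltaO, DeltaH and the two-term weight update come from. Differentiating the sum squared error through the output sigmoid gives
       \frac{\partial E}{\partial SumO_k} = -(Target_k - Output_k)\, Output_k\, (1 - Output_k) = -\, DeltaO_k ,
       \qquad DeltaH_j = \Big( \sum_k WeightHO_{j,k}\, DeltaO_k \Big)\, Hidden_j\, (1 - Hidden_j) ,
so one gradient descent step with momentum on a hidden-to-output weight is
       \Delta WeightHO_{j,k}^{(new)} = \eta\, Hidden_j\, DeltaO_k + \alpha\, \Delta WeightHO_{j,k}^{(old)} ,
which is exactly the line the k loop applies; the input-to-hidden update has the same form with Input and DeltaH in place of Hidden and DeltaO.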
So, we need to wrap the last block of code in something like
       for( epoch = 1 ; epoch < LARGENUMBER ; epoch++ ) {
         /* ABOVE CODE FOR ONE ITERATION */
         if( Error < SMALLNUMBER ) break ;
       }
If the training patterns are presented in the same systematic order during each epoch, it is possible for weight oscillations to occur. It is therefore generally a good idea to use a new random order for the training patterns for each epoch. If we put the NumPattern training pattern indices p in random order into an array ranpat[], then it is simply a matter of replacing our training pattern loop
       for( p = 1 ; p <= NumPattern ; p++ ) {
with
       for( np = 1 ; np <= NumPattern ; np++ ) {
         p = ranpat[np] ;
Generating the random array ranpat[] is not quite so simple, but the following code will do the job
       for( p = 1 ; p <= NumPattern ; p++ ) {    /* set up ordered array */
         ranpat[p] = p ;
       }
       for( p = 1 ; p <= NumPattern ; p++ ) {    /* swap random elements into each position */
         np = p + rando() * ( NumPattern + 1 - p ) ;
         op = ranpat[p] ; ranpat[p] = ranpat[np] ; ranpat[np] = op ;
       }
Naturally, one must set some initial network weights to start the learning process. Starting all the weights at zero is generally not a good idea, as that is often a local minimum of the error function. It is normal to initialize all the weights with small random values. If rando() is your favourite random number generator function that returns a flat distribution of random numbers in the range 0 to 1, and smallwt is the maximum absolute size of your initial weights, then an appropriate section of weight initialization code would be
       for( j = 1 ; j <= NumHidden ; j++ ) {     /* initialize WeightIH and DeltaWeightIH */
         for( i = 0 ; i <= NumInput ; i++ ) {
           DeltaWeightIH[i][j] = 0.0 ;
           WeightIH[i][j] = 2.0 * ( rando() - 0.5 ) * smallwt ;
         }
       }
       for( k = 1 ; k <= NumOutput ; k++ ) {     /* initialize WeightHO and DeltaWeightHO */
         for( j = 0 ; j <= NumHidden ; j++ ) {
           DeltaWeightHO[j][k] = 0.0 ;
           WeightHO[j][k] = 2.0 * ( rando() - 0.5 ) * smallwt ;
         }
       }
Note that it is a good idea to set all the initial DeltaWeights to zero at the same time. We now have enough code to put together a working neural network program. I have cut and pasted the above code into the file nn.c (which your browser should allow you to save into your own file space). I have added the standard #includes, declared all the variables, hard coded the standard XOR training data and values for eta, alpha and smallwt, #defined an over simple rando(), added some print statements to show what the network is doing, and wrapped the whole lot in a main(){ }. The file should compile and run in the normal way (e.g. using the UNIX commands 'cc nn.c -O -lm -o nn' and 'nn'). I've left plenty for the reader to do to convert this into a useful program, for example:
       Read the training data from file
       Allow the parameters (eta, alpha, smallwt, NumHidden, etc.) to be varied during runtime
       Have appropriate array sizes determined and allocate them memory during runtime
       Saving of weights to file, and reading them back in again
       Plotting of errors, output activations, etc. during training
There are also numerous network variations that could be implemented, for example:
       Batch learning, rather than on-line learning
       Alternative activation functions (linear, tanh, etc.)
       Real (rather than binary) valued outputs require linear output functions:
         Output[p][k] = SumO[p][k] ;
         DeltaO[k] = Target[p][k] - Output[p][k] ;
       Cross-Entropy cost function rather than Sum Squared Error:
         Error -= ( Target[p][k] * log( Output[p][k] ) + ( 1.0 - Target[p][k] ) * log( 1.0 - Output[p][k] ) ) ;
         DeltaO[k] = Target[p][k] - Output[p][k] ;
       Separate training, validation and testing sets
       Weight decay / Regularization
But from here on, you're on your own. I hope you found this page useful... - http://www.cs.bham.ac.uk/~jxb/NN/nn.html
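A brief note on why DeltaO[k] reduces to Target[p][k] - Output[p][k] in both of those variations (my own algebra, not part of the quoted page): with a linear output, Output = SumO, so the sigmoid-derivative factor simply never appears; with a sigmoid output and the cross-entropy error, the derivative of the cost cancels against the sigmoid derivative:
       E = -\sum_k \big[ Target_k \ln Output_k + (1 - Target_k) \ln (1 - Output_k) \big],
       \qquad Output_k = \sigma( SumO_k ),
       \frac{\partial E}{\partial SumO_k} = \frac{Output_k - Target_k}{Output_k (1 - Output_k)} \cdot Output_k (1 - Output_k)
       = Output_k - Target_k \;\Rightarrow\; DeltaO_k = Target_k - Output_k .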
  14. Hey, I was able to get this program started on my Samsung Behold using TK. It works fine, just a little laggy (connection-wise). For more info, check out how easy it is: http://www.samsung-behold.com/new-posts-third-party-storm-software/ How to use TK: http://www.samsung-behold.com/how-to/troubleshooting-tk-checklist/
  15. BoyZ in the Hood
  16. Samapico, I looked through the list in that link and I laughed when I saw.... Working Mother LoL.
  17. I had this problem on my computer and fixed it. Here's the "why" and the "how":
     Why: Some keys share the same circuit, so more than one key on that circuit can't be used at the same time; only one of them registers as "pressed" according to the computer.
     How: Press Num Lock and use the number pad as your arrows. Voila, fixed. It feels weird for the first 10 minutes, but after that you'll just wonder why you ever used the conventional arrows. o.O.. To make it a bit easier, just change "down" to 5 instead of 2 in your Keyboard/Controls setup in Continuum. Up: 8, Down: 5, Left: 4, Right: 6. It's a bit simpler because it's so spacious. Enjoy!
  18. Xog

    Word Association

    your vag skunks
  19. Xog

    Hs is Boring

    Nah! Picano was his original name; then he met megaman.exe and joined P.E.T. with Fadark, I think that was his full name. Then they all had the .exe tags on their names: picano-san.exe, fadark.exe... I think FunkmastaD made one too, and Swift and Nopy, and I know I made one, lol. TOO COOL did as well. This was all before Hyperspace was even a server and we chilled in ?go lasertag in Trench Wars, lol. We even have our own ?sheep from the Empyreal wars with !@#$%^&*sing (you probably won't understand it).
  20. Xog

    Hs is Boring

    I'm a vet too, I just haven't really played in HS since before the hypertubes were implemented. I miss the dream team from lasertag... Fadark.exe, picano-san.exe, megaman.exe, Xog, Nike, G u R L 637 (wow, how'd I remember all the caps and numbers!?!?), FunkmastaD, Nopy, Swift Warrior, so many others :< RIP LASERTAG
    edit: hey, is my house still in hstown? Was picano's secret catgirl private arena removed?
  21. http://img153.imageshack.us/img153/77/contkn6.jpg Not working. Yes, I'm connected to the internet, otherwise I wouldn't be posting here.
  22. I traced to SSCX Warzone CTF just in case it can help any.. hopone seems to be f'd.
     Tracing Zone: SSCX Warzone CTF
     IP Address: 66.36.241.110
     Hop  IP Address       Host Name                          Loss(%)  Min/Avg/Max Ping
     ---  ---------------  --------------------------------   -------  ----------------
       1  10.4.26.1                                               0.0     0    0    0
       2  10.4.199.253                                            0.0     0    0    0
       3  38.117.200.206                                          0.0     0    0    0
       4  38.112.26.245    fa0-2.na01.b000953-0.jfk02.atlas       0.0     0    0    0
       5  38.20.32.237     vl3910.mpd02.jfk02.atlas.cogentc       0.0     0   10  190
       6  154.54.5.230     te7-2.ccr04.jfk02.atlas.cogentco       0.0     0   10  190
       7  154.54.2.133     te7-3.ccr02.dca01.atlas.cogentco       0.0     0   20  220
       8  154.54.7.230     vl3493.mpd01.dca02.atlas.cogentc       0.0     0   10  220
       9  154.54.5.66      vl3497.mpd01.iad01.atlas.cogentc       0.0     0   20  200
      10  154.54.12.30     gblx.iad01.atlas.cogentco.com          0.0     0   10  180
      11  66.36.224.170    ge2-2.core2.dca2.hopone.net           99.0     0    0    0
      12  154.54.5.226     vl2.msfc1.distb2.dca2.hopone.net      19.0     0   10  150
      13  66.36.241.110    sls-cb9p7.dca2.superb.net              0.0     0    0   10
     Cycles: 105
  23. Hi, I've been playing for quite a while and never really had any problems with the zones and connections. Just ran into one recently. Some info: I work at a law firm, so our security is pretty !@#$%^&*-clinching. I have no control over port-forwarding, so I'm screwed if that's my only option. The problem: when I click Update Zone List (here at work) and click Download, it can't connect to the links. However, my tracerts trace perfectly fine, but all the zones are red. I cannot connect to any zones (anymore*). I receive no pings and I receive no # of players. A couple weeks ago, Hyperspace changed something. This used to be the only zone that was green after I installed Continuum. Now it's red again, and I can't play at all while I'm here at work. Now, it's time for you to fill in The Answer: