[Image: Milky Way - MultiverseSocial.com]
[Image: A picture of me]
Tony's Blog
21/10/21
You found me! Hi, my name is Tony. I worked as a hardware/software programmer for many years. I started programming in BASIC on the VIC-20, then spent a dozen years programming the Amiga in C/68k Assembler. I do low level firmware and program at the chip level. I do Unix system programming (POSIX), C sockets, and Linux/Windows device drivers. I do debugging down to the kernel.

 Aug 7 2022:
Niko was talking about generating randoms. This is my Java 64 bit random # generator.
It is based on a 32 bit version written by Leo Schwab back around 1986.
We used to have a saying "Thank God for Leo Schwab and all the little Swabbies!"


static long lHyperRandomSeed = System.currentTimeMillis();

public long lRand(long Range)
{
lHyperRandomSeed <<= 1;                                      // Shift everything left 1 bit
if (lHyperRandomSeed > 0xffffffffL)                          // See if we overflowed 32 bits
    lHyperRandomSeed ^= ((0x1D872B41L << 32) | 0x1D872B41L); // Exclusive OR with an arbitrary value
lHyperRandomSeed &= 0x00000000ffffffffL;                     // Coerce to 32 bits
return (lHyperRandomSeed % Range);                           // Return range modulo
}



 Aug 7 2022:
 Added these control bits for HyperView3.0

     // HyperView main enable bits. Some of these are deprecated.
     // HyperView->flags1

#define DISPLAY_ENA          BIT1    // before we overflow 32 bits
#define CONNECT_ENA          BIT2    // will split some of these
#define KEYBOARD_ENA         BIT3
#define CONSOLE_ENA          BIT4    // Enable stdin
#define TEXT_ENA             BIT5    // Enable System Text rendering
#define GOB_ENA              BIT6    // Enable Blittable Gobs()
#define LINE_ENA             BIT7    // Enable System lines buffer.
#define MENU_ENA             BIT8    // Enable X style pop menu.
#define OUTLINE_ENA          BIT9    // Draw outline in current FG Color.
#define LAYER_SWAP_ENA       BIT10   // Enable Component Layering.
#define KEEP_RUNNING         BIT11   // Asynchronous non blocking quit signal bit.
#define BACKGROUND_ENA       BIT12   // Enable background image rendering.
#define DEBUG_ENA            BIT13   // Enable running under debugger.
#define MENU_ACTIVE          BIT14   // Render pop up menu.
#define RUN_EXCEPTION        BIT15   // Oh Oh! Trigger exception handler.
#define BACKGROUND_LOADING   BIT16   // Background Image is still loading.
#define IMAGE_LOADING        BIT17   // Will do extra rendering while loading.
#define GOB_BOUNCED          BIT18   // It bounced; invoke Spline momentum/direction instructions if enabled.
#define TITLEBAR_ENA         BIT19   // Enable the titlebar at the top.
#define CONNECTED            BIT20   // You are connected to a server.
#define CONNECTING           BIT21   // You are in the process of connecting.
#define RUNSTACK_TIMER       BIT22   // How long to run the display at this display stack.
#define GURU_ENA             BIT23   // OH OH! Something failed.
#define GEM_REFRESH          BIT24   // Redraw the "Connect Gem" color on the titlebar when connection state changes.
#define OBSERVER_ECHO        BIT25   // Print all Image Observer messages to stdout.
#define INIT_OVERRIDE        BIT26
#define SKIP_RESTORE         BIT27   // Flag a 1 display frame
#define PARENT_IS_HYPERFRAME BIT28   // If I know the parent Component is a HyperFrame, I can call its methods.
                                     // Note: HyperFrame is deprecated for java 12.
// Wait for text to time out. Note this will run forever
// if the text does not actually time out, and it does not cancel
// a regular runView timeout.
#define WAIT_TEXT            BIT32

//
// A note on USE_BLIT_OPTIMIZE.
// This controls how the background clip restore blit is done.
// 1) Restore only the damaged part of the display caused by component moves and layer changes.
// 2) Stamp the entire display in one blit and then do the component render.
//
// #1 is generally faster when blits are small things like sprites, so you are restoring only tiny clips.
// #2 However, if you have a LARGE number of gobs, it will be quicker to stamp the whole background
// than to loop through and restore a pile of individual clips.
// FTM: You will have to determine which is better for you.
// TODO: Make this automatic via time based calculation.
//
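
Here is a rough sketch of the two strategies in plain SDL2 calls. This is just an illustration; the DamagedClip list and the restore_background() name are placeholders, not the real HyperView code.

#include <SDL2/SDL.h>

struct DamagedClip { SDL_Rect rect; struct DamagedClip *next; };

void restore_background(SDL_Renderer *renderer, SDL_Texture *background,
                        struct DamagedClip *damage_list, int use_blit_optimize)
{
    if (use_blit_optimize)
    {
        // #1 Restore only the damaged clips (small blits, quick when gobs are sprite sized).
        for (struct DamagedClip *clip = damage_list; clip; clip = clip->next)
            SDL_RenderCopy(renderer, background, &clip->rect, &clip->rect);
    }
    else
    {
        // #2 Stamp the entire background in one blit, then do the component render on top.
        SDL_RenderCopy(renderer, background, NULL, NULL);
    }
}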

// Main View Flags 2
// HyperView->flags2

#define BACKGROUND_UNDERLAP  BIT1
#define BACKGROUND_STRETCH   BIT2
#define EXTERNAL_SERVER_ENA  BIT3
#define SERVER_SPAWNED_VIEW  BIT4
#define GEM_UPDATE_OVERRIDE  BIT5
#define ERASE_VIEW           BIT6   // Erase signal
#define USE_BLIT_OPTIMIZE    BIT7   // Blit clip only bit (as opposed to whole display).
#define INITIALIZE_REFRESH   BIT8






 Aug 3 2022: Latest structures for the ContainerChannel.

struct Color
{
Uint32 argb;
};

struct ColorOperation
{
struct Color foreground_color;
struct Color select_color;
void *transform;
};

struct Point
{
struct Linkable node;
int x,y,z;
struct ColorOperation *color;
};

struct GraphPoint
{
struct Point point;
struct ColorOperation *color;
};

struct Window
{
struct Linkable node;
SDL_Window *base;
void *parent_screen;
Uint32 x,y,z,x2,y2,z2;
Uint32 width,height;
};

struct Screen
{
struct SDL_Rect rectangle;
SDL_Renderer *screen;
};

struct Graphics2D
{
struct Linkable node;
SDL_Window *main_window;
SDL_Renderer *renderer;
int x,y,z;
int width,height,depth;
struct HyperLinkedList blitterList;
Uint32 fgPen;
Uint32 bgPen;
};
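
A minimal sketch of how the SDL parts of Graphics2D could be filled in. My own illustration only (graphics2d_open is a made-up helper name), assuming SDL_Init(SDL_INIT_VIDEO) has already been called:

#include <SDL2/SDL.h>

int graphics2d_open(struct Graphics2D *g, const char *title, int width, int height)
{
g->width  = width;
g->height = height;
g->main_window = SDL_CreateWindow(title,
                                  SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                                  width, height, SDL_WINDOW_SHOWN);
if (!g->main_window)
    return -1;
g->renderer = SDL_CreateRenderer(g->main_window, -1, SDL_RENDERER_ACCELERATED);
if (!g->renderer)
    return -1;
g->fgPen = 0xFFFFFFFF;   // default pens: opaque white foreground,
g->bgPen = 0xFF000000;   // opaque black background (ARGB)
return 0;
}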

  

 July 30 2022
 Uploaded these albeit sad docs :O
Abort.html


 July 18:
 
Working on ContainerChannel. A Container is a displayable area of video RAM that you can put various displayable Components into.

struct ContainerChannel
{
struct Channel base;
SDL_Window *window;
SDL_Surface *gpu_ram;
SDL_Texture *heap_argb_ram;
struct HyperLinkedList *orange_list;
int *pixels;
};




 July 12:
 Still working through the Channel startup.
Added a pile of new structures, many of which are java analogues:
Color, Palette, Graphics2D, Point, Clip, CarteaseanPoint, Image, TitleBar, Gadget,
Dispatch, GraphPoint, Graph,
Buffer (as in NIO).

//===============
// latest Base Channel structure

struct Channel
{
struct Linkable        node;
int                    channel_flags;
int                    type;
int                    io_flags;
int                    pid;
long                   atomic_id;
long                   ipv6[2];
struct Signal          ch_signal;
char                   name[64];  
struct RunInfo         run_info;
struct Signal          signal;
struct HyperLinkedList channel_link; //-----
struct HyperLinkedList io_list;
struct NetXecThread    io_thread;
SDL_mutex              *io_lock;
void (*paint)(struct Graphics2D);  // The lone function pointer
};
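
Here is a minimal sketch (my own illustration, not the real code; my_paint and channel_repaint are made-up names) of how that lone paint function pointer could be hooked up and called, using the Graphics2D struct shown above:

static void my_paint(struct Graphics2D g)
{
// Example body: clear to the background pen (ARGB) and present.
SDL_SetRenderDrawColor(g.renderer,
                       (g.bgPen >> 16) & 0xff,   // R
                       (g.bgPen >> 8)  & 0xff,   // G
                        g.bgPen        & 0xff,   // B
                       (g.bgPen >> 24) & 0xff);  // A
SDL_RenderClear(g.renderer);
SDL_RenderPresent(g.renderer);
}

static void channel_repaint(struct Channel *ch, struct Graphics2D g)
{
if (ch->paint)          // only call it if a paint handler was installed
    ch->paint(g);
}

// usage:  some_channel->paint = my_paint;   channel_repaint(some_channel, gfx);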


July 10:
 Programming groups I am in.
 Simple Direct Media Layer
 Videolan VLC/LibVLC
 NVIDIA/CUDA
 W3 Consortium
 Seamonkey
 
Not including ones I may follow. Everything I use I have compiled, including the compiler.
 ATM I am using version 12. I was patched up to 13.03, but there were serious issues.
 Tools Page

 Latest Channels structure
 

June 26:
Tons of changes.
Made several big changes to the IOBlock interface.

I am right at the point where I am trying to link.
Fixing invalid offsets and dangling references.

System Channels:
What are they?? SystemChannels are the basic channels
that give you direct access to the underlying hardware.
Any Channel you create you will assemble out of system channels.

Together, all system channels combined give you access to 100% of the
 allowable underlying device control. There are at this juncture 17 SystemChannels
 (sketched as a C enum just after the list below).

 SystemChannel;       AKA NetXecChannel. Process/Thread/RunInfo monitor and control
 DeviceChannel        Open and control underlying hardware drivers.
 MemoryChannel        6 types of memory: Heap, Stack, GPU, IPC, Cloud, and Registry.
 ThreadChannel        Creates/dispatches thread on CPU or GPU.
 SignalChannel        Creation & assignment of hardware based signal bits.
 ArbitrationChannel   Monitor/Lock arbitrator.
 MediaChannel         Audio mp3/wav/Blit Video/VLC driver      
 ClockChannel         Asynchronous 1 second clock & timer interface
 ContainerChannel     Screen/Window graphics/svg load & save/blitter functions.
 URLChannel           URI fetcher
 ServerChannel        Non blocking single process select socket server.
 ProcessChannel       Background number crunching.
 StateMachineChannel  function/state allocation and control
 CLIChannel           command line interface
 CompilerChannel      Multi language compile/link
 DebugChannel         debugger interface
 NimosiniChannel      AI channel
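
Just for illustration, here is that list expressed as a plain C enum. This is my own sketch; the real code may use different names and values.

enum SystemChannelType
{
CHANNEL_SYSTEM,        // AKA NetXecChannel
CHANNEL_DEVICE,
CHANNEL_MEMORY,
CHANNEL_THREAD,
CHANNEL_SIGNAL,
CHANNEL_ARBITRATION,
CHANNEL_MEDIA,
CHANNEL_CLOCK,
CHANNEL_CONTAINER,
CHANNEL_URL,
CHANNEL_SERVER,
CHANNEL_PROCESS,
CHANNEL_STATEMACHINE,
CHANNEL_CLI,
CHANNEL_COMPILER,
CHANNEL_DEBUG,
CHANNEL_NIMOSINI       // 17 SystemChannels in all
};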

Signals:
 I am an NVIDIA developer. I spent many weeks going over windows/posix threads and the signal
 incongruities. I had designed an interface that had 15 bits of resolution compressed to 8 bits.
 But when I started working on CUDA, I realized it had over 1000 GPU threads!!!
 That requires a hell of a lot more signals than 15!
 So: 32 system signals at the operating system level.
 Many of these are already reserved for mouse/keyboard/windows etc.
 Will have to test how many are left over. 1 is enough.

 512 extended signals via a SignalGroup. This is a compromise as opposed to always
 allocating 1024.
 


New structure
#define SIGNALGROUP_BLOCK 0x000000ff

struct Signal
{
char *name;
int signal;
int signal_enable;
void *reply_function;
struct Signal *next_signal;
};

   // revamped
struct SignalGroup
{
struct Signal      *signal[SIGNALGROUP_BLOCK];
void               *reply_function[SIGNALGROUP_BLOCK];
struct SignalGroup *next_signal_group;
};
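
Rough sketch (my own; signalgroup_extend is a made-up helper name) of chaining a fresh SignalGroup block onto the end of the chain when the current block fills up:

#include <stdlib.h>

// Append a fresh, zeroed SignalGroup to the end of the chain.
// Returns the new group, or NULL on allocation failure.
struct SignalGroup *signalgroup_extend(struct SignalGroup *group)
{
while (group->next_signal_group)            // walk to the last block in the chain
    group = group->next_signal_group;

struct SignalGroup *fresh = calloc(1, sizeof(struct SignalGroup));
if (fresh)
    group->next_signal_group = fresh;       // link it in; all slots start empty
return fresh;
}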

 ATM Mashing/compiling/linking :)



Code Babble. Sat June 11 2022
 I have done a pile of work this week. Went through every single library I am using, one by one,
 and there are many. Made a test program and one by one linked/tested and dumped the ones that didn't work,
 plus weeded out all the yet to be documented deprecated functions in SDL and have all the latest defs patched in. Spent much effort with SDL_Video. Not ready yet as most code is transitioning from Version 1.
 Spent much of yesterday working on ContainerChannel and ThreadChannel.

 System Channel refactoring.

There are now > 4 < Channel types for > Web 4 <:

BaseChannel
All Channels encapsulate a base channel.

SystemChannel
There are 32 System channels.
These Channels are generally Singletons (but don't have to be).

CompositeChannel

SubChannel

After much time mulling over everything, I have decided to limit



 New 32 bit Signal structure


struct Signal
{
char *name;
int signal;
int signal_enable;
void *reply_function;
};

//-- Drastically increase the number of signals for SignalGroup to accommodate GPUChannel.

   // revamped
struct SignalGroup
{
struct Signal  *signal[0xff];
void           *reply_function [0xff];
};

struct NetXecSignal
{
int signal;
int totalbits;
int maxbits;
};

 This is how the runtime stack layer init() wraps the SDL thread creation call for init()
 and pause() states. This hopefully will tie up the loose ends left by the Posix committee around signal
 dispatch on concurrent waiting threads. Right now they all wake up.
 I started working with CUDA and have the latest tools and libraries. Going to map out a common interface
 with ARM8/Adreno. Have that all installed. Got VLCChannel linked and compiled. Recompiling vlc
failed. Still running in 32 bit mode. Can't afford the time to port over the thousands of objects necessary. Will continue to drive VLC as a process in 32 bit till the 64 bit libVLC upgrade is stable.


 Parameter specific function pointer code trick.

#include <stdio.h>
#include <unistd.h>
#include <SDL2/SDL_thread.h>

static char buffer[256];

// tVoid is an integer pointer for 32 bit signals

int function1(void *tVoid)
{
int pid = getpid();
SDL_threadID tSDL_ThreadID1 = SDL_GetThreadID(NULL);   // ID of the calling thread
sprintf(buffer, "proc: %08x SDL_ThreadID %08lx arg %p", (unsigned)pid, (unsigned long)tSDL_ThreadID1, tVoid);
return 0;
}

// Wrap the parameter specific function pointer in a struct so the signature travels with it.
typedef struct
{
int (*function1)(void *);
}int_void_FunctionPointer;

struct ThreadCreationRequest;   // defined elsewhere in the Channel code

static int ThreadNexus(void *tThreadMonitor)
{
struct ThreadCreationRequest *tThreadCreationRequest =
    (struct ThreadCreationRequest *)tThreadMonitor;
(void)tThreadCreationRequest;                  // cast shown for the trick; not used further here
int_void_FunctionPointer ivfp = {function1};

  sprintf(buffer, " Thread Monitor %p", tThreadMonitor);
  ivfp.function1(tThreadMonitor);
  return 1;
}
  
 
 Graphics Hardware Layers:
 We have 2 types of displayable "Bitmaps":
1) An unprocessed main memory raster comprised of 32 bit ARGB data.
     "This is the analogue to a java MemoryImageSource."

2) A copy of this data in GPU memory that has been scaled according
 to the width/height resolution/color model and pixels per inch of the
 target display. These are your accelerated images.
  The function of the GPUChannel is to allow access to the concurrent
  capabilities of the GPU.
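
In plain SDL2 terms the two flavours look roughly like this. A sketch only; the variable and function names are placeholders, not the real ContainerChannel/GPUChannel code.

#include <SDL2/SDL.h>

SDL_Texture *make_gpu_copy(SDL_Renderer *renderer, int width, int height)
{
// 1) Main memory raster: 32 bit ARGB pixels you can poke directly
//    (the analogue of a java MemoryImageSource).
SDL_Surface *raster = SDL_CreateRGBSurfaceWithFormat(
    0, width, height, 32, SDL_PIXELFORMAT_ARGB8888);
if (!raster)
    return NULL;

// ... write pixels into raster->pixels here ...

// 2) GPU copy: the renderer converts/scales it for the target display.
//    This is the accelerated image.
SDL_Texture *accelerated = SDL_CreateTextureFromSurface(renderer, raster);
SDL_FreeSurface(raster);
return accelerated;
}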

 Off to work on ThreadChannel more. My latest NVIDIA Developer upgrades have
wiped out my ShadowPlay so no more video till I get a new machine.
 Machine is available. I just have to go get it from my Engineer. I also need
 a new monitor for my ARM8/SnapDragon box but that's another story :)

 



What is the Channel Paradigm and why?

The Channel Paradigm is an abstraction that completely
homogenizes all hardware and software.
It is in this regard like a device.
All Channels have these capabilities (see the interface sketch after this list):

1) Any Channel can join any other Channel.
2) Any Channel can listen to any other Channel.
3) Any Channel can write to any other Channel.
4) All Channels are non blocking and reentrant and can never deadlock.
5) Channels are only written in C or Assembler.
6) All Channels extend the BaseChannel.
7) When complete, Channels will be completely "Machine generated".
8) Final stage: the AI will create Channels on command.
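
A purely hypothetical interface sketch of points 1) through 3); these function names are my own illustration, not the real code base:

struct Channel;   // the real struct Channel is shown earlier on this page

// 1) join, 2) listen, 3) write -- every Channel exposes the same surface.
int channel_join(struct Channel *self, struct Channel *other);
int channel_listen(struct Channel *self, struct Channel *other,
                   void (*on_data)(struct Channel *from, const void *data, int length));
int channel_write(struct Channel *self, struct Channel *other,
                  const void *data, int length);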

Why?

  • So that the hardware/software runs at its maximum unencumbered capacity for the new AI revolution.
  • So we finally have "Write once and run anywhere."

 



This is my social interaction/media site. I am one person plugging away at this. This is going to be my OWN social interaction/media site. I am writing the back end in C language and assembler. I also have written a substantial
java API. I have been working on it since Java version 1.
Tony's little java et al group on FB. Here are some docs: HyperView docs

I have been working on this and the back end for over a year.
I work full time in the operating room of the cardiac ICU of the 2nd largest hospital in N.A.; I do this when I am not working.
The back end is written in C and utilizes the Common Gateway Interface. On Facebook everyone gets a timeline. On MultiverseSocial, you get a 3D fractal Planet located in a section of the Milky Way.

I am working primarily on 6 things.
1) Java 12 upgrade for java HyperView & Component based Applet replacement classes.
2) Tony's Channel Paradigm and > Web 4 <
3) BML based Web browser & concurrent Apache mod written in C/Assembler.
4) BQL data base.
5) HTML5 HyperView interface in C driven CGI/Javascript.
6) Mongoose-C Python replacement project.

Orthographic random planet map.
Made with HyperView2.99.
[Image: Fractal Planet]

 May 16 2022   
 I have been cobbling the Channels together and I am right at the point where I am ready to allocate and start the main IOBlock. This is comprised of 4 threads:

1) NetXecChannel System thread
2) Join/Leave Channel thread
3) IO ProcessChannel thread
4) Exception Thread
---
Yesterday I was working out the details of my "Mongoose-C" version of Python List. HyperLinkedList.c is a doubly linked list which is WAY more flexible than Python List[]. Traversing a List would be quicker, I think, if Python wasn't 45K slower than C and loosely cast. This is what I came up with so far.


// Mongoose-C List by Tony Swain

#include <stdio.h>
#include "HyperLinkedList.h"
#include "structs.h"

union ListData
{
char c;
int i;
float f;
double d;
char *string;
struct HyperLinkedList *hyperlinkedlist;
};

struct List
{
struct Linkable node;
union ListData *data_list;
};
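
Quick usage sketch (my own; make_int_node is a made-up helper) showing one List node holding an int:

#include <stdlib.h>

struct List *make_int_node(int value)
{
struct List *node = calloc(1, sizeof(struct List));
if (!node)
    return NULL;
node->data_list = calloc(1, sizeof(union ListData));
if (!node->data_list)
    {
    free(node);
    return NULL;
    }
node->data_list->i = value;     // pick the int member of the union
return node;
}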








 Blather of the day for April 30
Just for the sake of doing something different,
working on GPSChannel.
Added great circle calculation and ReadGPS (read stdin and parse an NMEA record into the relevant structure).
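
The great circle part is basically the haversine formula. A standalone sketch for reference; the actual GPSChannel code may differ:

#include <math.h>

// Great circle distance between two lat/lon points given in degrees.
double great_circle_km(double lat1, double lon1, double lat2, double lon2)
{
const double pi = 3.14159265358979323846;
const double earth_radius_km = 6371.0;
double to_rad = pi / 180.0;
double dlat = (lat2 - lat1) * to_rad;
double dlon = (lon2 - lon1) * to_rad;
double a = sin(dlat / 2) * sin(dlat / 2) +
           cos(lat1 * to_rad) * cos(lat2 * to_rad) *
           sin(dlon / 2) * sin(dlon / 2);
return 2.0 * earth_radius_km * atan2(sqrt(a), sqrt(1.0 - a));
}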


  Blather of the day for April 29
After 2 days got MemoryChannel and ThreadChannel to link.
Wrote a new CompressionChannel which does LZ77 compression.
It is a hack of filgrim, which is a hack of pix.c, which doesn't exist anymore. This was originally part of the PNG image compression methods. I intend to use it to compress the BML data and pack it all into one compressed file.
I inadvertently also added the PNG save function, a polynomial plot function, and some raster draw functions.
Will move these over to the ContainerChannel, which already has SVG save/load routines. Integrated the antlr lib for lexical speech parsing for the NimosiniChannel.
Nimosini is the AI that will drive Multiversesocial.com
I only have speech working in Edge ATM.
hmm have to get to work....











The > Web 4 <  Revolution.
Because Tony clings to the heretical view that
 Web 3 is a Corporate Neutered Sandbox that is not optimal.


What is different and why is it better?
Differences:
  • Channel driven
  • Binary BML/BQL dynamic hardware addressing
  • Non blocking
  • Re-entrant
  • Hardware/software independent
  • AI driven.
  • Open Source. Not open source? Not > Web 4 <
  • 100% C or Assembler
  • Machine generated code

  Programmers rest in hammock ;)



March 30/22
It has been a good week. Piles of things done, from recompiling gcc, to sorting out the nauseatingly massive pile of dependencies, to getting X64 working with the latest SDL libraries. Recompiled those fresh off the debug queue.
Have these Channels in various stages of assembly....

  • This has taken me MANY hours to figure out.
  • There are several issues and factors at play.
    I spent over > 12 weeks < going through the posix, MSYS2, mingw64, and windows thread code from kernel to C library. After that, I am impressed at how it is all put together.
  •    However, it suffers from several issues. Let's start with signals.
    Trust me; in windows, forget it. Not even close to posix compliant.
    Pretty much segv and sig_int. Trust me, don't try to use any others; you will encounter:
    Tony Term:
    "The Posix Multiple Thread Dangling Signal Catastrophe"
  • ie:
    If multiple threads are waiting on the same signal,
    which thread gets the CPU? >> Posix committee: Undefined <<
    Current model:
    "Single Signal Multiple Thread Wake up" -- prone to race and deadlock.

    SignalChannel addresses this and adds
    > "Single Thread signal Registration" <

    Also a "ThreadGroup" that allows for 512 blocks of signals "Per thread".
    Note: there is a limit of 31 ThreadGroups.
    Maximum system signals therefore are > 15,872
    Note** Note ** Note**
    STILL "Single Signal Multiple Thread Wake up"

    BUT NetXec will take the thread number and divide the "multiple threads that woke up" time slice by this value.
    NetXec will "randomly" run each thread in turn for a time that equals the aggregate average runtime
    of the thread divided by the number of threads. In other words:
    multiple threads waiting on one signal will all run slower, because the normal time slice will be rationed
    among all the threads. BUT this will increase and improve the multiplexing granularity on such multiply signaled threads.

    NOTE:  the signal itself is > 15 bits packed/compressed to 8 <

    Thread Priority Levels.
    Current model:
    Forget it! Don't mess with it; you WILL be sorry! :)
    NetXec:
    Fine grained/precise  control of all thread time slices.
    All threads are preemptive and non blocking.
    Now, let's see if it um actually works ;)


  • Galaxy/Solar System [image]
