diff --git a/.gitmodules b/.gitmodules deleted file mode 100644 index e69de29..0000000 diff --git a/AP_compsci.pdf b/AP_compsci.pdf new file mode 100644 index 0000000..0b95d0b Binary files /dev/null and b/AP_compsci.pdf differ diff --git a/AP_physics.pdf b/AP_physics.pdf new file mode 100644 index 0000000..0dd69f3 Binary files /dev/null and b/AP_physics.pdf differ diff --git a/apexam.pdf b/apexam.pdf new file mode 100644 index 0000000..0dd69f3 Binary files /dev/null and b/apexam.pdf differ diff --git a/article.php b/article.php index 16d3b13..a95a04c 100644 --- a/article.php +++ b/article.php @@ -4,18 +4,49 @@ $slug = $_GET["slug"]; $servername = "localhost"; $username = "local"; $password = ""; +$dbname = "articles"; +$conn = new mysqli($servername, $username, $password, $dbname); -$article_query = "SELECT * FROM articles WHERE slug = ?"; -//$keyword_query = "SELECT * FROM keywords WHERE id = ?"; +if (!function_exists('mysqli_init') && !extension_loaded('mysqli')) { + echo 'We don\'t have mysqli!!!'; +} else { + echo 'Phew we have it!'; +} +$article_query = "SELECT title, author, description, keywords, updated FROM articles WHERE slug = ?"; +$keyword_query = "SELECT keyword FROM keywords WHERE id = ?"; + +if ($conn->connect_error) { + die("Database connection failed: " . $conn->connect_error); +} + +if (!($stmt = $conn->prepare($article_query))) { + echo "Prepare failed: (" . $conn->errno . ") " . 
$conn->error; +} + +$stmt->bind_param("s", $slug); +$stmt->execute(); +$article = $stmt->get_result()->fetch_assoc(); + +$stmt = $conn->prepare($keyword_query); +$stmt->bind_param("i", $article['keywords']); +$stmt->execute(); +$keywords = $stmt->get_result()->fetch_assoc(); + +$conn->close(); + +$article_content_file_location = "/var/www/html/articles/" . basename($slug); // "." concatenates in PHP; basename() blocks path traversal +$article_content_file = fopen($article_content_file_location, "r") or die ("404 File not found!"); +$article_content = fread($article_content_file, filesize($article_content_file_location)); +fclose($article_content_file); ?> - - + + <?php echo $article['title'] ?> - Lijero @@ -40,11 +71,26 @@ _paq.push(['enableLinkTracking']);
-

+

+
+ +
+ + + \ No newline at end of file diff --git a/articles/beyond-operating-systems/index.xhtml b/articles/beyond-operating-systems/index.xhtml deleted file mode 100644 index 9815219..0000000 --- a/articles/beyond-operating-systems/index.xhtml +++ /dev/null @@ -1,187 +0,0 @@ - - - - - - - - - - - Beyond Operating Systems - Lijero - - - - - - - - - - -
-
-

Beyond Operating Systems - Abstracting Over Networks

- -
-

Introduction

-

- I've come to get my crazy idea smashed apart. Or open an innovative discussion! Whatever it is, I don't wanna waste your time so I'll be brief. Let me know if anything needs clarification, since I'm shrinking a couple notebooks into one article. -

-

Programs running on the internet itself

-

- I want to write software for the whole internet! That can mean a lot of things, though: -

-

-

- Operating systems mean we can write software for any computer, but I want to write software for every computer at once! A globally distributed operating system. -

-

Building up to a global OS

-

- It's gonna take a lot of layers of abstraction. Bear with me, I'll get there eventually! -

-

Step 1: The Resource Graph

-

- If you're going to write a distributed operating system, it better not matter where data comes from. So unify every resource, from memory to files to HTTP calls, into a resource graph. That said, you can't make assumptions about your surroundings either: implicit dependencies have got to go. -

-

Unifying resources

-

- We still need to know what data is, which can be pretty tough when you don't know where it's coming from. The solution is an extremely strong type system. The type system tells software what data actually is. That said, different resources have different properties, so this has to be expressed in the type system as well. -
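The claim that "the type system tells software what data actually is" can be pictured as resources that carry their type and their properties with them, independent of origin. A minimal Python sketch (the tags and names here are illustrative, not part of any real design):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    # What the data is (a type tag) and how it behaves (properties),
    # carried with the value itself rather than implied by its source.
    type: str              # e.g. "text/plain" -- illustrative tag
    properties: frozenset  # e.g. {"cheap", "mutable"} vs {"slow", "read-only"}
    value: object

local = Resource("text/plain", frozenset({"cheap", "mutable"}), "some notes")
remote = Resource("text/plain", frozenset({"slow", "read-only"}), "a page")

def word_count(r):
    # Consumers dispatch on what the data is, never on where it came from.
    assert r.type == "text/plain"
    return len(r.value.split())

print(word_count(local), word_count(remote))
```

A real version would be a static type system, not runtime tags; this only illustrates the shape of the idea.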

-

Killing implicit dependencies

-

- This is where the resource graph comes in. A graph is just a general way to structure data. It's basically a generalized filesystem. The important difference is that there is no root. We move data around by building, rebuilding, and passing resource graphs, rather than ACLs on a general filesystem or whatever. We pass around references. -

-

Programming with a resource graph

-

- Today's languages are not well-suited to this model of data. Neither are computers: you can't have a pointer to an HTTP resource. Also, it needs to be enforced, so RIP native binaries. Not that you could have them anyway, since the internet runs on a lot of architectures. Instead, you'd need a bytecode interpreter that can actually handle it all natively. -

-

- Also, I swear I am not a Lisp shill and I had no intention of it, but Lisp happens to be really good at dealing with linked data structures. It'd probably be really convenient for dealing with this sort of thing. Though this system is strongly typed, unlike Lisp. -

-

There's more to resources than CRUD

-

- There needs to be an actual way to implement all these fancy types of nodes. And how can we not care about the source of something being HTTP if we need to make a query? The solution is simple: make every node a computation. And by computation I mean closure. -

-

- I mean, how did you think the nodes were implemented? Most nodes are actually a procedure to fetch the requested content. But you can also make nodes with inputs and outputs, such as an HTTP query. Of course, we need to know where we're getting the data from, so that must have been put in earlier. The resources with that information have been enclosed in the node. -

-

- Nodes are functions. They're partially applied with a portion of the graph they require to make closures. In fact, functions are nodes too. So are all processes. You pass in a parameter graph and get back data. -

-

- Now I hope it makes sense how we can have all the features of a kernel without syscalls. Your malloc or whatever is enclosed with the relevant information, including any synchronization references. -
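The nodes-as-closures idea above can be sketched in a few lines of Python (everything here is illustrative; a real node would enclose actual device or network primitives):

```python
from functools import partial

# A node is a procedure; the resources it needs are enclosed at
# construction time rather than looked up through any global namespace.

def make_memory_node(store):
    # Encloses a backing store; the node can touch that store and nothing else.
    def read(key):
        return store[key]
    return read

def fake_fetch(url):
    # Stand-in for an HTTP client, so the sketch stays self-contained.
    return "response from " + url

def make_http_node(fetch, url):
    # The URL was enclosed earlier; callers just invoke the node.
    return partial(fetch, url)

mem = make_memory_node({"greeting": "hello"})
http = make_http_node(fake_fetch, "http://example.com/data")

print(mem("greeting"))  # the node, not the caller, knows where the bytes live
print(http())
```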

-

Step 2: Computational Economy

-

- So what if we assign an expense to a function? We can count instructions, give them different expenses, and sum them up to get a final price. We can calculate the range of potential prices by calculating branches as lumps and adding them each time a branch is taken. From here, we can extend it to the cost of data (e.g. memory) by calculating quantity of data over computational time. Bandwidth costs are obvious too. This is how you can implement schedulers, when you expose the interpreter as a resource. -
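A toy version of that lump-per-block accounting, assuming Python (the opcodes and their prices are invented for illustration):

```python
# Each opcode gets a price; a block's cost is pre-summed once, and the
# running total only grows by a lump when a branch into it is taken.
PRICE = {"load": 2, "store": 2, "add": 1, "jump": 1}

def block_cost(block):
    return sum(PRICE[op] for op in block)

blocks = {
    "entry": ["load", "add", "jump"],
    "then":  ["store"],
    "else":  ["load", "store"],
}

def path_cost(path):
    # Total expense of one control-flow path through the program.
    return sum(block_cost(blocks[name]) for name in path)

print(path_cost(["entry", "then"]))   # 4 + 2
print(path_cost(["entry", "else"]))   # 4 + 4
```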

-

What about the halting problem?

-

- Totality proofs (manual or automatic), and relating expense to the input. Where it cannot be proven, the maximum expense can be considered infinite. In many of those cases, it would make sense to put an upper bound on the amount you're willing to pay to run a function. -
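That upper bound can be sketched as a fuel counter, assuming Python (names invented; a real system would derive the bound from the totality proof rather than guessing):

```python
class OutOfFuel(Exception):
    pass

def run_with_budget(step, state, budget):
    # `step` returns (done, new_state); each call costs one unit.
    # We refuse to spend more than the agreed maximum expense.
    while budget > 0:
        done, state = step(state)
        budget -= 1
        if done:
            return state
    raise OutOfFuel("exceeded the maximum we were willing to pay")

def countdown(n):
    return (n <= 1, n - 1)

print(run_with_budget(countdown, 5, budget=10))  # terminates within budget
try:
    run_with_budget(countdown, 5, budget=3)
except OutOfFuel as e:
    print(e)
```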

-

Paying for our remote access

-

- Servers can also assign an expense to queries if you modify e.g. HTTP to do it. HTTP is just a kind of RPC anyway, and it now behaves exactly like one in our resource graph. Now you can pay them for your access to their resource graph! -

- We've now turned computational expense into a currency! -

- How cryptocurrencies are implemented is out of the scope of this article, but we can now turn this into one, and trade our currency for computation. -

-

-But to continue, if accessing someone else's resources has a cost, we should probably be paying for it, right? Paying for goods and services. Seems familiar. Maybe our "money" really is just that-- a currency. Are y'all familiar with cryptocurrencies? Because we're well on our way to making one. Explaining how blockchains like bitcoin work is waay out of the scope of this post (or perhaps not-- but at any rate, it's too long to explain in this one), but to SUPER summarize it, we can use cryptographic mAgIc to sell portions of transactions and stuff with cryptographic proof, tl;dr a distributed currency. I'll have to explain how it works, and how you can make contracts to prevent scams and stuff, and how to deal with redundant computation in a different post, because you have no idea just how much further I can push this (i.e. I don't have time for that sort of digression).

The end result is that your pair of computers can trade resources and do all sorts of crazy stuff all beneath the nose of the program itself. This can be generalized to a more sensible system by decoupling the e.g. local per-instruction expense and currency to allow competition between suppliers, but that's pretty complicated too, and I want to stick to the realm of abstractions, rather than defining the economic system. Anyway, from there you can just extend this abstraction over an entire network of computers, and buy/sell resources as needed. You're probably seeing a lot of issues at this point (e.g. what if the other computers lie, what if they drop my data, what about latency, what about network partitions, what if they steal my data, how do I deal with all of these transactions in a performant manner), but don't worry, I'll get to them later. Actually, there are going to be a /lot/ of places coming up where I'll have to get to it later, so I'm going to stop giving this notice, but I will cover them. 
Also, if you're thinking, "this keeps sounding less and less like osdev", you're right. I'm no longer strictly discussing operating systems-- I've generalized the concept of an operating system, which abstracts over hardware and other such things, to an entire network.

Now that's a pretty big claim, but none of what I've said so far is all that impressive. Most of us have enough power on our computers not to need to buy it, and just doing these tasks the old-fashioned way is a heck of a lot easier for these benefits so far (though before I got so far out, the ideas still in the realm of typical operating systems offer more justification than a lot of hobby OSes have). The thing is, I'm claiming to abstract over the internet, but really all I've talked about is loaning resources and paying for data. Applications haven't changed that much, only their execution platform, and a couple of conveniences. For example, your HTTP server still holds all your data, you're just getting paid for holding it (though you can pay other people to act like a CDN). The issue is that while the computation is distributed and that space can be loaned, data is still fundamentally centralized. Sure, it's over a bunch of computers, but there's no concept of shared data. Your applications can run all over the internet, but not on it. -

-

Running ON the Internet -- Fully distributed applications

-Now we're finally at my point. What I want is to be able to write fully distributed multi-user applications in a similar way to how we write standard applications now. There shouldn't be a server or host or central authority, you shouldn't have to deal with figuring out where data is stored, you shouldn't have to figure out the fine details of protocols, and such things. Publishing pages or files Web 1.0-style shouldn't require any hosting, and all data should receive the benefits of torrenting automatically. Interactive content (web 2.0, IRC, etc) should behave similarly. Games shouldn't need to worry about hosting (all multiplayer games should be as easy as LAN hosting is today, and large servers should be a hell of a lot cheaper i.e. free). A distributed data store. - -

Getting back to details

-If you can't think of a billion potential issues with this, I'm disappointed in you. But don't worry, I've reached peak abstraction, and now I can start working my way back down into details and issues. I'm probably going to go from easier issues to harder issues and low-level issues to high-level issues, just fyi. I know this is written like an FAQ but I keep building on important details here. - -* Aren't interpreters slow? How is it a good idea to run your entire operating system on them? -They are typically slow, but as they're the core of the entire everything, a lot of optimization would go into it. It'd probably be rather low-level bytecode, and have an extremely efficient interpreter. Some degree of JITting would probably occur, when there is very high confidence it won't open up a bug. Since resources are managed in a graph and implicit dependencies are impossible, I'd assume people could take advantage of these properties to achieve an extremely high level of parallelism, like they do in functional programming languages. - -* How would it be safe to run unknown code on your computer? -It's not, but you're probably doing it right now with your browser. There's the obvious distinction between going to trusted websites and getting pushed data from strangers, though. There's some inevitable risk in any internet connection, and the best you can do is minimize that risk. I would try to keep the inner interpreter extremely simple and small, perhaps even going as far as to formally verify it, to prevent exploits; JITting should be used with extreme caution; I would sandbox processes pushed to me when possible; and I would also try to take the precautions used by OSes such as OpenBSD. Overall though, I'm reasonably confident in it, and the complete lack of implicit dependencies makes it a hell of a lot more secure than running random executables, even sandboxed ones, and brings it closer to e.g. JS. 
The language wouldn't even deal with numeric pointers (to real memory, not as in the concept), to prevent accidental pointer arithmetic bugs. - -

How do I access raw resources from the interpreted language?

-The interpreter would pass in operations for raw resource access into the boot program, which can then create closures wrapping these resources (e.g. a memory allocator, a partition reader, other device wrappers) and keep building abstractions on these, with only the abstractions being included in the resource graph for normal programs. How these raw resources are provided is an implementation detail. It could potentially involve bootstrap stages and assembly output. - -
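That bootstrapping might look something like this (a Python sketch; `raw_read_block` and the in-memory "device" are stand-ins for whatever primitives the interpreter actually provides):

```python
# Pretend raw primitive that only the boot program ever receives.
_DEVICE = {0: b"hello", 1: b"world"}

def raw_read_block(n):
    return _DEVICE[n]

def boot(read_block):
    # The boot program wraps the raw primitive in closures; only these
    # wrappers go into the resource graph handed to normal programs.
    def read_text(n):
        return read_block(n).decode()
    return {"read_text": read_text}

graph = boot(raw_read_block)
print(graph["read_text"](0))  # programs see the abstraction, not the device
```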

Okay, well how do these abstractions work? What is a graph node anyhow?

-Each node is essentially a function call, though most of the actual function calls would be optimized away. You pass in a graph with the correct data types, and assign the result to another local graph node. The closures are actually more like partially applied functions in this sense, since the graph has to be explicitly passed in and must have exactly the required functions and nothing more, but I call them closures because it "captures" the resource access primitives from the creator, and then gets given actual input later. Resources like an HTTP call are essentially just lazily-evaluated procedures. - -

How is it performant to have a program distributed over multiple computers? How does the network manage latency and throughput?

-Usually, it isn't. Loaning resources isn't the point, it's just a necessary abstraction to build the rest of the network, for things like contracts, and managing the distributed data store. Usually, the actual computation and its related data will be transferred as a block (lazily, of course) to the actual place of execution, and only independent batch jobs will be transferred. A computer would rarely actually pay for a computation unless necessary (think swap space), and will do computations locally when possible, and local-initiated computations always take priority over sold computations. There are only two purposes other than for internal infrastructure and massively distributed computation projects: conserving battery life, and accessing remote resources. Often it doesn't make sense to download large portions of a database when the content you actually want is a lot smaller, so you just pay someone who already has it to do it for you. If this is a common case and it gets overwhelmed, its own price goes up until someone else finds it cheaper to grab the data than pay for it, and then you have two cheap servers. They don't have to maintain a clone of the database obviously, but in most cases, for a commonly-accessed resource, it makes sense because you can resell (seed) the data yourself, or you'll want to retrieve it again and have it there. This is how static (as opposed to stateful, not dynamic) data propagates through the network. You can tell whether something is common by announcing you have it, and holding onto it. This means that the more popular a resource is, the faster and cheaper it becomes to retrieve because more people have it. If it becomes less popular, or otherwise overhosted, people will automatically drop it because it's no longer profitable to host. Additionally, the more people host something, the more likely there is a version close to you, which reduces latency. This is especially true of geographically popular data. 
Beyond that, it is trivial to chunk data, so that the end users receive all data like a bittorrent. Through these mechanisms, static data would be much faster than the normal internet, since all data behaves like a torrent or CDN. Stateful data still receives many of these benefits, but that'll have to wait for later. - -I had to take a break from writing this post, so hopefully I don't miss anything. I probably will, having lost my train of thought now. - -* Isn't it expensive to track computations like that / CPU cache & branch prediction? -You can calculate the length of each block in advance, and only need to add the sum of the block to the total when a branch is taken. Locally initiated code could under some circumstances be free and not need to be tracked at all. - -* Can this have any benefit to security, as far as local programs go? -Isolation of programs & capability-based security: no implicit dependencies or global resource graph to exploit. You can't traverse e.g. the whole filesystem because the concept of "the whole filesystem" isn't meaningful. - -
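The supply-and-demand behaviour described above can be caricatured with a toy pricing rule (assuming Python; the formula is invented purely to show the direction of the incentives):

```python
def price(base_cost, hosts, demand):
    # Load on each host falls as copies appear, so the asking price falls too.
    return base_cost * demand / hosts

# One overloaded host is expensive, which makes grabbing a copy attractive...
assert price(10, hosts=1, demand=8) > price(10, hosts=4, demand=8)

# ...and as popularity attracts more hosts, retrieval gets cheaper.
for hosts in (1, 2, 4, 8):
    print(hosts, price(10, hosts=hosts, demand=8))
```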

The implementation of distributed state

-Oh lord. Now I have to move onto the implementation of distributed state. I'm not really looking forward to this, since everything I've said so far is just the tip of the iceberg as far as details go. Most of my countless pages of notebook are trying to work around the insane difficulties of building a general case for distributed applications. Can't I just stick to hype? Shit. Man, and that's not even counting explaining the usual blockchain problems. - -

The guarantees of a distributed system

- What we're essentially looking at is an extremely complex internet overlay network implemented on top of the operating system I've described so far, to provide guarantees about data. We have opened the gates of hell. I suppose I should start off by listing the properties that such a network needs to be able to have: - -
- - - diff --git a/articles/testpost b/articles/testpost new file mode 100644 index 0000000..2eec599 --- /dev/null +++ b/articles/testpost @@ -0,0 +1 @@ +Test content diff --git a/design.pdf b/design.pdf new file mode 100644 index 0000000..c20ca0e Binary files /dev/null and b/design.pdf differ diff --git a/gk.jpg b/gk.jpg new file mode 100644 index 0000000..0940a6d Binary files /dev/null and b/gk.jpg differ diff --git a/menukill.xhtml b/index-old.xhtml old mode 100755 new mode 100644 similarity index 98% rename from menukill.xhtml rename to index-old.xhtml index 95b09aa..bfb47de --- a/menukill.xhtml +++ b/index-old.xhtml @@ -39,9 +39,6 @@
@@ -75,3 +72,4 @@ but this often comes at the cost of readability. My goal with this website is to + diff --git a/index.nginx-debian.html b/index.nginx-debian.html new file mode 100644 index 0000000..2ca3b95 --- /dev/null +++ b/index.nginx-debian.html @@ -0,0 +1,25 @@ + + + +Welcome to nginx! + + + +

Welcome to nginx!

+

If you see this page, the nginx web server is successfully installed and +working. Further configuration is required.

+ +

For online documentation and support please refer to +nginx.org.
+Commercial support is available at +nginx.com.

+ +

Thank you for using nginx.

+ + diff --git a/index.xhtml b/index.xhtml old mode 100755 new mode 100644 index 529a43a..f66ea99 --- a/index.xhtml +++ b/index.xhtml @@ -17,6 +17,8 @@ + + - - - -
-
-

Test Content

-
-

Summary

-
-
To make an apple pie from scratch, you must first invent the universe.
-
~ Carl Sagan
-
-

NOTICE: I have some real content located at Beyond Operating Systems, though it is heavily WIP. -

-

-This is some test content. The purpose of this content is to demonstrate the readability of small-width, large-margined, well-spaced, large text. A lot of websites are extremely compact, -but this often comes at the cost of readability. My goal with this website is to provide an entirely different and more attractive user experience. -

-

Lorem Ipsum

-

Pellentesque habitant morbi tristique senectus et netus et malesuada fames ac turpis egestas. Vestibulum tortor quam, feugiat vitae, ultricies eget, tempor sit amet, ante. Donec eu libero sit amet quam egestas semper. Aenean ultricies mi vitae est. Mauris placerat eleifend leo. Quisque sit amet est et sapien ullamcorper pharetra. Vestibulum erat wisi, condimentum sed, commodo vitae, ornare sit amet, wisi. Aenean fermentum, elit eget tincidunt condimentum, eros ipsum rutrum orci, sagittis tempus lacus enim ac dui. Donec non enim in turpis pulvinar facilisis. Ut felis. Praesent dapibus, neque id cursus faucibus, tortor neque egestas augue, eu vulputate magna eros eu erat. Aliquam erat volutpat. Nam dui mi, tincidunt quis, accumsan porttitor, facilisis luctus, metus

-

Extra Content

-

I have a bunch of content already, but I think a third paragraph would be nice too. In addition, this lets me interact with the website through scrolling, which could possibly make a difference, though I can't say for sure. Anyway, this content should help with both of these issues.

-
- - - diff --git a/resume.xhtml b/index.xhtml~ similarity index 60% rename from resume.xhtml rename to index.xhtml~ index 04ff21d..3e5e9fc 100644 --- a/resume.xhtml +++ b/index.xhtml~ @@ -6,14 +6,17 @@ + + - Lijero - Projects + Lijero + + + + +
+

Community

+

Github: swittepl/switti

+

Feel free to open an issue or pull request!

+

IRC Channel: #switte IRC on freenode.net

+ +
+ + diff --git a/switte/community/index.xhtml~ b/switte/community/index.xhtml~ new file mode 100644 index 0000000..d0e2909 --- /dev/null +++ b/switte/community/index.xhtml~ @@ -0,0 +1,62 @@ + + + + + + + Community - The Switte Programming Language + + + + + + + + + + +
+

+ Switte is a programming language that uses simplicity to its advantage. +

+

+This is an example program.
+> ABC = mulI1 parI1 [ lemI ] map2
+      
+

Features

+

Instant Relocation

+ Save a program mid-execution and restart it anywhere. +

Flat Model

+ Arbitrary partitions of a program are equivalent. + Programs can safely be run out-of-order and trivially parallelized! +

Partial Evaluation

+ Don't have the whole set of data all at once? That's fine! +

Complete Safety

+ There's no such thing as a runtime error. +
+ + diff --git a/switte/documentation/index.xhtml b/switte/documentation/index.xhtml new file mode 100644 index 0000000..628b243 --- /dev/null +++ b/switte/documentation/index.xhtml @@ -0,0 +1,47 @@ + + + + + + + The Switte Programming Language + + + + + + + + + + +
+

Documentation

+

Switte's main documentation is the specification/manual generated from the literate program Switti. Check out the downloads.

+
+ + diff --git a/switte/documentation/index.xhtml~ b/switte/documentation/index.xhtml~ new file mode 100644 index 0000000..fd992b3 --- /dev/null +++ b/switte/documentation/index.xhtml~ @@ -0,0 +1,46 @@ + + + + + + + The Switte Programming Language + + + + + + + + + + +
+

Documentation

+
+ + diff --git a/switte/download/index.xhtml b/switte/download/index.xhtml new file mode 100644 index 0000000..10ea470 --- /dev/null +++ b/switte/download/index.xhtml @@ -0,0 +1,49 @@ + + + + + + + Downloads -- The Switte Programming Language + + + + + + + + + + +
+

Downloads

+

+ There are currently no releases and no build server. However, you can build the latest code using the instructions on GitHub. +

+
+ + diff --git a/switte/download/index.xhtml~ b/switte/download/index.xhtml~ new file mode 100644 index 0000000..2c3b919 --- /dev/null +++ b/switte/download/index.xhtml~ @@ -0,0 +1,62 @@ + + + + + + + The Switte Programming Language -- Downloads + + + + + + + + + + +
+

+ Switte is a programming language that uses simplicity to its advantage. +

+

+This is an example program.
+> ABC = mulI1 parI1 [ lemI ] map2
+      
+

Features

+

Instant Relocation

+ Save a program mid-execution and restart it anywhere. +

Flat Model

+ Arbitrary partitions of a program are equivalent. + Programs can safely be run out-of-order and trivially parallelized! +

Partial Evaluation

+ Don't have the whole set of data all at once? That's fine! +

Complete Safety

+ There's no such thing as a runtime error. +
+ + diff --git a/switte/index.xhtml b/switte/index.xhtml new file mode 100644 index 0000000..3219872 --- /dev/null +++ b/switte/index.xhtml @@ -0,0 +1,64 @@ + + + + + + + The Switte Programming Language + + + + + + + + + + +
+

+ Switte is a programming language that uses simplicity to its advantage. +

+

+This is an example program.
+> ABC = mulI1 parI1 [ mulE1 ] [ lemI ] map
+      
+

Features

+

Instant Relocation

+

Save a program mid-execution and restart it anywhere.

+

Flat Model

+

+ Arbitrary partitions of a program are equivalent. + Programs can safely be run out-of-order and trivially parallelized! +

+

Partial Evaluation

+

Don't have the whole set of data all at once? That's fine!

+

Complete Safety

+

There's no such thing as a runtime error.

+
+ + diff --git a/switte/index.xhtml~ b/switte/index.xhtml~ new file mode 100644 index 0000000..87815c3 --- /dev/null +++ b/switte/index.xhtml~ @@ -0,0 +1,14 @@ + + + + + + + The Switte Programming Language + + + + + + + diff --git a/switteprim.txt b/switteprim.txt deleted file mode 100644 index 1f99f9c..0000000 --- a/switteprim.txt +++ /dev/null @@ -1,296 +0,0 @@ -INTRODUCTION -================= - -This system is derived from linear type theory, such that for a value x with type A: - -(x : A) means that x must be used exactly once. -(x : ?A) means that x may be used at most once, or forgotten. -(x : !A) means that x may be used multiple times, but at least once. -(x : ?!A) means that x may be used zero or more times. - -? is read "why not", and ! is read "of course". - -Additive means "there are two possibilities, but you only get one". In other words, - in each branch of an additive relationship, all of the non-? variables in the - context must be disposed of. -Multiplicative means "you get both of these at once". In other words, all of the - non-? variables must be disposed of as a whole. -Conjunction means "both values exist at once". -Disjunction means "only one of these values exist at once". - -Additive conjunction (a & b) means "you can use either a or b, it's up to you which". -Additive disjunction (a + b) means "I am going to give you an a or a b, and you must - be prepared to deal with either, but not both at the same time." -Multiplicative conjunction (a * b) means "I am going to give you an a and a b, and - you have to use both". -Multiplicative disjunction (a | b) means "you must be prepared to recieve both an a - and a b, but ultimately only one will be used". If that's a bit confusing, that's alright. - - -AXIOM -===== - -a | -a - -where -a represents the dual of a, and | represents multiplicative disjunction. -This is the axiom of the excluded middle, meaning to "a is true or not true". 
-Its implementation is explained in "multiplicative disjunction". There are no -further axioms. - - -Duals: - -While they're a general construct, the dual is actually only used for the -definition of the axiom, so I'll specify it here: - - -(-a) = a - -(a & b) = -a + -b - -(a * b) = -a | -b - -Top = 0 - -1 = Bottom - -(!a) = ?(-a) - -and vice versa in all cases. - - -Distributivity: - -This isn't actually related to the axiom, but I might as well while I'm at it. -None of these are built-in rules, they are just helpful notions provable within -the language itself. These are the distributive laws: - - a * (b + c) = (a * b) + (a * c) - a | (b & c) = (a | b) & (a | c) - a * 0 = 0 - a & Top = Top - -notice how "exponentials" convert "addition" into "multiplication"-- hence the names: - - !(a & b) = !a * !b - ?(a + b) = ?a | ?b - !Top = 1 - ?0 = Bottom - -and vice versa in all cases in both sets listed. - - -STRUCTURAL RULES -================ - -The weakening and contraction rules are specified twice since -I hope to extend this into an ordered logic, so I have to include -the exchange-requiring equivalents. This is also true for constructors and -eliminators in the computational rules. All of these can be implemented only -once, with the second rule being implemented in terms of the first via a swap -that /doesn't/ apply an unordered constraint. If I do not extend this to an -ordered logic, you can simply ignore all of the second rules. - -Weakening: a -> ?b -> a ### const - ?a -> b -> b ### flip const -Contraction: (a -> !a -> b) -> (!a -> b) ### join - (!a -> a -> b) -> (!a -> b) ### join . flip -Exchange: (a -> b -> c) -> (b -> a -> c) ### flip - (or equivalently (a * b) -> (b * a)) - -(todo: do I need a dual exchange rule for &?) - - -COMPUTATIONAL RULES -=================== - -These are the introduction and elimination rules for compound terms in the -language. Introduction terms are labeled with an i, elimination terms with an e. 
-When multiple introduction/elimination rules are required, they are each labelled -with a number. - - -Additive conjunction (with): - -Topi1 : a -> a & Top -Topi2 : a -> Top & a -additively (all of the context is used in either branch of the options a and b), - &i : a -> b -> a & b -&e1 : a & b -> a -&e2 : a & b -> b - -The eliminators may be called "projections". - -There is no elimination rule for Top, since it doesn't exist. It can only -be introducted in terms of the type signatures in Top1 and Top2, and the -only thing you can do with it is ignore it via the associated projection. -It doesn't have to exist because nothing using the projection for Top -can exist because it cannot be used for anything, nor weakened. - -Additive conjunction can be interpreted as a pair of procedure pointers, -with the context enclosed inside. The projections are function calls that -choose which procedure to invoke to consume the context. A call to Top -is equivalent to nontermination, since Top doesn't exist. - -The compiler could probably safely maybe automatically insert the -eliminators when the corresponding type is required, but I haven't totally -thought this through, and there are potential issues with e.g. -(a & b) & a, in the event that choice of a matters. - - -Additive disjunction (plus): - -0e : 0 -> a -+i1 : a -> a + b -+i2 : b -> a + b -+e1 : (a -> c) & (b -> c) -> (a + b -> c) -+e2 : (b -> c) & (a -> c) -> (a + b -> c) - -0, like Top, does not exist. It only exists as a fragment of the type -signature in disjunction introduction, because - +i1 (x : A) : forall b. A + b -and 0 is simply an instanciation of b, taking the entire forall thing -far too literally, as shown in its eliminator. It is impossible to -have an actual instance of 0, because by definition, it is the type -not selected when you create a +. 
In terms of elimination, all eliminations -of an 0 + a type are equivalent to (e1 0e), with the exception of the fact -that you have to destroy anything in your context that cannot be weakened. -This can be trivially implemented as using 0e to generate a function type -that directly consumes all of the terms in the context, and returning a c. - -The eliminators are rather intuitive. The choice of & is necessary because -each option must entirely dispose of its context, as is in the definition -of &i. - -A value of a + b is trivial: it's just a tagged union, i.e. an indicator -of whether it's a or b, and the value a or b. - - -Multiplicative conjunction (times): - -1i : 1 -1e1 : 1 * a -> a -1e2 : a * a -> a -*i : a -> b -> a * b -*e1 : a * b -> (a -> b -> c) -> c -*e2 : (a -> b -> c) -> a * b -> c - -1 does exist, and in fact its sole introduction rule is "1 exists", but -it holds no information, so in practice it doesn't have to be stored. -Formally it has only one elimination rule, 1 -> , but a function -can't return nothing, so instead you have to eliminate it via its definition -as the identity element. - -The rest of it is pretty intuitive. In memory, it's just a pair of values. - - -Multiplicative disjunction (par): - -Bottomi1 : a -> a | Bottom -Bottomi2 : a -> Bottom | a -Bottome : Bottom -> a -|i : ?a * ?b -> a | b -|e1 : ?(a -> c) * ?(b -> c) -> a | b -> c -|e2 : a | b -> ?(a -> c) * ?(b -> c) -> c - -Multiplicative disjunction is by far the least intuitive operator, corresponding -to delimited continuations and functions. - -Bottom does not exist. Like with additive disjunction, it's just part of -the type signature, and it simply exists to not get chosen. - -As you can see, |e produces only one c despite having two options, both -of which are provided in |i. That's because |e is inherently -nondeterministic. |i and |e are both entirely weakened because neither -choice will necessarily be chosen, yet they both must exist in full. 
This means the entire closure they live in must somehow be weakened, since it
is not possible to dispose of a full value in both branches at once; that
would constitute double use. The * in the signature is there to emphasize the
relationship to multiplicative conjunction and its behavior, just as additive
disjunction is defined in terms of additive conjunction.

A par value is implemented as a pair of function calls. In the eliminator,
either function may be invoked, and while the value is being used, the
eliminator may choose to forget the function halfway through and jump to the
other option instead, making it equivalent to a continuation, the functional
equivalent of a goto.

The main use of par is in the sole axiom of the excluded middle, which encodes
the equivalent of functions and delimited continuations. For reference, the
type of a function is defined as (a -> b = -a | b). Essentially, the axiom
can't pull a value of type a out of thin air, so it is forced to choose the
-a branch. -a is equivalent to a -> Bottom, and can therefore only be used by
providing a value of type a; so instead of actually producing that Bottom, the
function simply jumps to the branch that demanded an a in the first place and
returns that result. Functions are implemented the same way, only roughly in
reverse: run the side that claims to have an extra a it needs to dispose of,
and once that hole is obtained, produce the a in the other branch and
terminate. In either case the context is implicitly consumed, which is
equivalent to the function returning. As you can see, these are simply
different perspectives on the same axiom.

If you don't get it, that's fine: it's super confusing, and I don't fully
understand it either.


TYPECLASSES & MODULES
=====================

I haven't really figured out typeclasses.
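The function-as-par story above can be caricatured in untyped Python, using an exception as the "jump" between the two branches of the par. Every name here is my own illustration, not anything Switte defines.

```python
# Hypothetical sketch: encode f : a -> b as the par value -a | b.

class _Jump(Exception):
    # Transferring control to the other branch of the par.
    def __init__(self, value):
        self.value = value

def function_as_par(f):
    # The negative branch -a (i.e. a -> Bottom) consumes an a and never
    # returns normally: instead it computes b and jumps to the positive
    # branch. "Providing the Bottom" is really a goto.
    def neg(a):
        raise _Jump(f(a))
    return neg

def call(neg, a):
    # The caller feeds an a to the -a branch; the function's "return" is
    # the jump landing in the branch that demanded the a.
    try:
        neg(a)
    except _Jump as j:
        return j.value

assert call(function_as_par(lambda x: x + 1), 41) == 42
```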
Basically I intend for them to be existential types (via dependent sums,
maybe?), but I need to figure out how to make them practical first.

Requirements:
* It must be possible to use the instanced values without having to unwrap the
class implementation manually first.
* It must be possible to use the function instances equally easily, without
having to declare the names at each usage point.
* They must permit non-unique instances.
* Instances must be stackable, the outermost one being used unless the inner
one is exposed within one of the instance's type-contextual manipulations
(e.g. functor). This matters especially because Copy should implement !, but
it should be possible to rc a copiable variable and have it behave as
expected. Copy is described in more detail under Mutation.

Modules are essentially the same thing as typeclasses and have similar
requirements; the answer to both probably lies in the same place. Lightweight
modules that are as easy to use as typeclasses would be a major plus too.

I don't suspect this problem will be all that hard to resolve.


As far as ! and ? go, they are essentially typeclasses (@ below is a typeclass
too). The rules below are effectively the introduction rules for ! and ?,
though they cannot be derived without external functions (for obvious
reasons), and since ! and ? are typeclasses, it isn't appropriate to give them
introduction rules directly.

To specify a !, you need a class instance satisfying either the contraction
rule, or equivalently, the following:

    !i : a -> !(() -> a)

To specify a ?, you need a class instance satisfying either the weakening
rule, or equivalently, the following:

    ?i : a -> ?(a -> ())

The reason these are essentially introduction rules becomes more obvious if
you wrap each into a (-> !a) or (-> ?a) respectively.


MUTATION
========

@ means "mutable" to the compiler.
Normal values are immutable, which means that to make changes, the entire
object has to be deconstructed and reconstructed. Mutation allows direct
modification of the data structure without that concern. Generally things are
mutable by default, but this may not be the case for external values, or for
internal values reached via pointers.

@ extends ?, which means a mutable pointer may be freed; it does not extend !,
so you cannot have multiple instances of a mutable pointer.

Losing mutation in favor of replication:

    rc : @a -> !?a, introducing a reference-counted pointer

Some values may implement Copy, allowing deep duplication. All composite
structures may automatically derive Copy whenever all of their constituents
are !. Copy is, of course, an implementation of !, though using rc anyway may
be desirable for performance reasons, especially for highly composite
structures.

    copy : Copy a => a -> @a

Generally, syntactic sugar should eliminate most of this crap, such that the
programmer shouldn't have to deal with it most of the time.

diff --git a/swittestorm.txt b/swittestorm.txt
new file mode 100644
index 0000000..c831804
--- /dev/null
+++ b/swittestorm.txt
@@ -0,0 +1,27 @@

* Highly parallel processing-- distribute code over CPU cores, GPU cores, or
computers on a network, completely transparently.
* Free networked programs-- location-agnostic code can be run anywhere or
everywhere, and multi-user systems via a multi-entry-point program.
* Streamable computations-- computation can begin even before all of the data
has arrived.
* High performance-- simple and fast computations on CPU or GPU, easy to
compile.
* Hard real-time, strict determinism-- extremely simple program semantics and
implementation allow making very strong guarantees, including precise
computation times and results.
* Secure-- deterministic and provable program behavior allows trivially
creating secure subsets of the language.
* High-quality debugging tools-- the compiler is aware of a large amount of
metadata for generating messages, and program images allow easy
reproducibility/analysis.
* Portability-- no programming-language behavior is platform-dependent, so
only calls to native libraries are platform-dependent.
* Provable correctness-- everything else not good enough for you? Prove your
programs correct using the formal semantics.
* Low- or high-level-- the language depends on virtually no runtime (just
malloc), not even a garbage collector, and would be suitable for writing
OS-level code wherever C would be. In fact, it could be feasible to implement
Switte as a CPU instruction set.
* High-quality type inference-- strongly and statically typed, but you'd never
know it.
* Code inference-- a built-in proof helper can automatically infer your code
from its type.
* Highly granular versioning-- hash- and graph-based version control applies
not just to libraries, but to individual functions and types. This could
theoretically be used for a completely decentralized/distributed package
manager.
* High-quality documentation-- every aspect is formally specified and
explained for academics, developers, or laymen, such that anyone can learn it.
* Extreme simplicity-- a very small yet complete set of primitives and
virtually no syntax makes it easy to learn or port.
* Extensibility-- if you don't appreciate the simplicity, you can simply
encode your favorite syntax or language structures directly into the language.
* Mathematician- and logician-suitable-- its basis in a very general type
theory makes it suitable for mathematicians (though unfortunately it is not
homotopy-based).
* Linguist-suitable-- the Lambek fragment is suitable for describing
categorial grammars.
* Statistics interpretation-- the non-associative fragment corresponds to
concepts in statistics.
* Programmer-suitable-- if you don't care about all that stuff, the full
fragment is Turing-complete and a suitable programming language.
* Native literate programming support.
* Rich code editing-- textual source code is possible but unnecessary.
Compiled code and source code are essentially equivalent, so between the
instruction set and the hash-graph VCS, it is possible to interact with code
entirely non-textually.
* Eager or lazy?-- irrelevant! Switte's linear semantics make the evaluation
strategy definitionally irrelevant. In other words, Switte is strongly
normalizing (with a few exceptions related to the full fragment).
* Stateless-- dump a running program and restart it anywhere. You might even
argue that all Switte programs are, in a sense, always running.
* Flatness-- Switte has no syntax, and therefore arbitrary partitions of
programs are equivalent.
* Simplicity-- a very small set of primitives and no syntax.
* Derived from type theory-- clear formal semantics make it easy for both
humans and compilers to reason about programs.