initial commit

This commit is contained in:
Loki Verloren
2021-05-03 10:43:10 +02:00
commit 0e2bba237a
1289 changed files with 349874 additions and 0 deletions

2
.gitignore vendored Normal file

@@ -0,0 +1,2 @@
pod
.idea

1
CNAME Normal file

@@ -0,0 +1 @@
pod.parallelcoin.io

56
CONTRIBUTION.md Normal file

@@ -0,0 +1,56 @@
### Contribution Guidelines
Loki, the primary author of this code, has somewhat unconventional ideas about everything, including Go. They developed out of the process of building this application, where compact logging syntax with visual recognisability became a key debugging technique for mostly multi-threaded code; tools have been developed around this, and the conventions are found throughout the code, increasingly as all the dusty parts get looked at again.
So there are a few things you should do, which Loki thinks you will realise should be the default anyway. He will stop speaking in the third person now.
1. `e` is error
Being a verb, `err` is a stolen name, and it is also three times as long. It is nearly universal among programmers that `i`, and usually also `j` and `k`, are iterator variables, and commonly also `x`, `y` and `z`, most cogently when they are coordinates. So use `e`, not `err`.
2. Use the logger: `E.Chk` for serious conditions or while debugging, `D.Chk` for non-blocking failures
It's not difficult: literally copy any `log.go` file out of this repo and change the package name, and you can use the logger, which includes the handy check functions that certain Go authors use in several of their lectures and texts. Those functions are probably the origin of the whole idea of making the error type a pointer to string, which led to the Wrap/Unwrap interface, which I don't use.
3. Use `if` statements with ALL calls that return errors or booleans
Declare their variables with `var` statements, unless `e` is not used again before all paths of execution return.
4. Prefer to return early
The `.Chk` functions return true if the error is not nil, so without a `!` in front, the most common case, the contents of the following `{}` are the error-handling code.
In general, and especially when the process is not idempotent (a changed order breaks the process), which will be most of the time with several calls in a sequence, and especially in Gio code, which is naturally a bit wide if you write it to make it easily sliced, you want to keep the success line of execution as far left as possible.
Sometimes the negative condition is ignored, because there is a retry or it is not critical; for these cases use `!E.Chk` and put the success path inside the `if` block.
5. In the Gio code of the wallet, take advantage of the fact that you can break a line after the dot `.` operator, as you will see amply throughout the code; it allows items to be repositioned and added with minimal fuss.
Deep levels of closures and stacked variables of any kind tend to lead quickly to a lot of nasty red lines in one's IDE. The fluent method chaining pattern is used because it is far more concise than a raw type definition followed by a closure attached with the dot operator, even though writing the whole chain that way would be valid and pass `gofmt` unchanged (assuming it somehow had no anonymous functions in it).
6. Use the logger
This is partly repeated because it is very important. Regardless of programmer opinions about whether a debugger is a better tool than a log viewer, note that, while it is not fully implemented, `pod` already contains a protocol to aggregate child process logs invisibly from the terminal through an IPC, and logging is one means of enabling auditability of code.
Logs should concern themselves primarily with metadata, exposing data only at `trace` level (with the `T` printer). The really heavy stuff, like printing tree walks over thousands of nodes and other similarly expensive operations, goes inside closures via `T.C(func() string {...})`, so the work is only done when the level is enabled.
7. Let `gofmt` sort the imports, and avoid whenever possible a package name different from its folder, unless you can't avoid an import name conflict. A notable example is uber/atomic.
8. 120 characters wide
We are living in the 21st century. It was already common as this century started for monitors to be big enough to show 120 or more characters; 80 characters is just too narrow.
It would be even better if I figured out a way to make the Gio code not stack out to the right so deep, but it's comfortable in 120 unless I have put too many things into one closure.
9. Always put doc comments on exported functions. Put them on struct fields and variables too, when detailed information isn't already available elsewhere for the item. Keep doc comments to one line unless you really need to explain something that must be visible in the documentation. For the most part this means libraries, and a lot of those are in independent repositories.
10. Avoid pointers for anything other than structs and fixed-length arrays. Pointers require mutexes in concurrent code, which is the rule rather than the exception, and it is incredibly easy to take a lock inside a downstream function when it is already held, freezing the application.
Instead, as a default option, use atomics, with any further processing or hook-calling done from getters and setters. If it's a big struct, so much the better.
Atomics are perfectly suited to independent threads with narrow responsibilities, and they also don't lock other threads out of accessing the struct's other fields concurrently, which can become a bottleneck that invisibly creeps up on you.
11. `if e = function(); E.Chk(e) {}` is the prototype of how all functions returning errors should be invoked. It is more compact, and it doesn't noise up the call site the way the alternative fills up your screen.
12. When considering whether to use an interface, ask whether a generator might be easier (and perhaps faster). The vocal generics-loving programmers, usually under two years in and coming from generics-abusing languages like C#, Java or C++, can just go away. Go14life. Automation > abbreviation.
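The conventions above (use `e` for errors, invoke error-returning calls in `if` statements, return early, prefer atomics over mutex-guarded pointers) can be sketched in a few lines. `chk` here is a hypothetical stand-in for the logger's `E.Chk`, which also logs the call site, and the repo itself uses go.uber.org/atomic where this sketch uses the standard library:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
	"sync/atomic"
)

// chk reports whether e is non-nil, logging it when it is. It is a
// stand-in for E.Chk; the shape of the calls is the point, not the logging.
func chk(e error) bool {
	if e != nil {
		fmt.Println("ERR:", e)
		return true
	}
	return false
}

// Counter keeps its state in an atomic rather than behind a mutexed
// pointer, so independent threads never block each other on it.
type Counter struct {
	n atomic.Int64
}

func (c *Counter) Inc() int64 { return c.n.Add(1) }
func (c *Counter) Get() int64 { return c.n.Load() }

// parsePositive declares e with the named returns, invokes the call in an
// if statement, and returns early so the success path stays as far left
// as possible.
func parsePositive(s string) (n int, e error) {
	if n, e = strconv.Atoi(s); chk(e) {
		return 0, e
	}
	if n <= 0 {
		return 0, errors.New("not positive: " + s)
	}
	return n, nil
}

func main() {
	var e error
	var n int
	// the negative condition is handled inline, so !chk guards the success path
	if n, e = parsePositive("42"); !chk(e) {
		fmt.Println("parsed", n)
	}
	var c Counter
	c.Inc()
	fmt.Println("count", c.Get())
}
```
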

30
Dockerfile Executable file

@@ -0,0 +1,30 @@
# TODO: write the build for minimal Go environment and headless build
FROM golang:1.14 as builder
WORKDIR /pod
COPY .git /pod/.git
COPY app /pod/app
COPY cmd /pod/cmd
COPY pkg /pod/pkg
COPY stroy /pod/stroy
COPY version /pod/version
COPY go.??? /pod/
COPY ./*.go /pod/
RUN ls
ENV GOBIN "/bin"
ENV PATH "$GOBIN:$PATH"
RUN cd /pod && go install ./cmd/build/.
RUN cd /pod && build build
RUN cd /pod && stroy docker
RUN stroy teststopnode
FROM ubuntu:20.04
ENV GOBIN "/bin"
ENV PATH "$GOBIN:$PATH"
#RUN /usr/bin/which sh
COPY --from=builder /bin/stroy /bin/
COPY --from=builder /bin/pod /bin/
#COPY --from=builder /usr/bin/sh /bin/
#RUN echo $PATH && /bin/stroy
RUN /bin/pod version
EXPOSE 11048 11047 21048 21047
CMD ["tail", "-f", "/dev/null"]
#CMD /usr/local/bin/parallelcoind -txindex -debug -debugnet -rpcuser=user -rpcpassword=pa55word -connect=127.0.0.1:11047 -connect=seed1.parallelcoin.info -bind=127.0.0.1 -port=11147 -rpcport=11148

24
LICENSE Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.

Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.

In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.

For more information, please refer to <https://unlicense.org>

1
README.md Normal file

@@ -0,0 +1 @@
p9

380
cmd/ctl/ctl.go Normal file

@@ -0,0 +1,380 @@
package ctl
import (
"bufio"
"bytes"
"context"
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"fmt"
"io"
"io/ioutil"
"net"
"net/http"
"os"
"strings"
"github.com/btcsuite/go-socks/socks"
"github.com/p9c/p9/pkg/btcjson"
"github.com/p9c/p9/pod/config"
)
// Call uses settings in the context to call the method with the given parameters and returns the raw json bytes
func Call(
cx *config.Config, wallet bool, method string, params ...interface{},
) (result []byte, e error) {
// Ensure the specified method identifies a valid registered command and is one of the usable types.
var usageFlags btcjson.UsageFlag
usageFlags, e = btcjson.MethodUsageFlags(method)
if e != nil {
e = errors.New("Unrecognized command '" + method + "' : " + e.Error())
// HelpPrint()
return
}
if usageFlags&btcjson.UnusableFlags != 0 {
E.F("The '%s' command can only be used via websockets\n", method)
e = errors.New("the '" + method + "' command can only be used via websockets")
// HelpPrint()
return
}
// Attempt to create the appropriate command using the arguments provided by the user.
var cmd interface{}
cmd, e = btcjson.NewCmd(method, params...)
if e != nil {
// Show the error along with its error code when it's a json.BTCJSONError, as it realistically will always be,
// since the NewCmd function is only supposed to return errors of that type.
if jerr, ok := e.(btcjson.GeneralError); ok {
errText := fmt.Sprintf("%s command: %v (code: %s)\n", method, e, jerr.ErrorCode)
e = errors.New(errText)
// CommandUsage(method)
return
}
// The error is not a json.BTCJSONError and this really should not happen. Nevertheless, fall back to just
// showing the error in case it happens due to a bug in the package.
errText := fmt.Sprintf("%s command: %v\n", method, e)
e = errors.New(errText)
// CommandUsage(method)
return
}
// Marshal the command into a JSON-RPC byte slice in preparation for sending it to the RPC server.
var marshalledJSON []byte
marshalledJSON, e = btcjson.MarshalCmd(1, cmd)
if e != nil {
return
}
// Send the JSON-RPC request to the server using the user-specified connection configuration.
result, e = sendPostRequest(marshalledJSON, cx, wallet)
if e != nil {
return
}
return
}
// newHTTPClient returns a new HTTP client that is configured according to the proxy and TLS settings in the associated
// connection configuration.
func newHTTPClient(cfg *config.Config) (*http.Client, func(), error) {
var dial func(ctx context.Context, network string, addr string) (net.Conn, error)
ctx, cancel := context.WithCancel(context.Background())
// Configure proxy if needed.
if cfg.ProxyAddress.V() != "" {
proxy := &socks.Proxy{
Addr: cfg.ProxyAddress.V(),
Username: cfg.ProxyUser.V(),
Password: cfg.ProxyPass.V(),
}
dial = func(_ context.Context, network string, addr string) (
net.Conn, error,
) {
c, e := proxy.Dial(network, addr)
if e != nil {
return nil, e
}
go func() {
out:
for {
select {
case <-ctx.Done():
if e := c.Close(); E.Chk(e) {
}
break out
}
}
}()
return c, nil
}
}
// Configure TLS if needed.
var tlsConfig *tls.Config
if cfg.ClientTLS.True() && cfg.RPCCert.V() != "" {
pem, e := ioutil.ReadFile(cfg.RPCCert.V())
if e != nil {
cancel()
return nil, nil, e
}
pool := x509.NewCertPool()
pool.AppendCertsFromPEM(pem)
tlsConfig = &tls.Config{
RootCAs: pool,
InsecureSkipVerify: cfg.TLSSkipVerify.True(),
}
}
// Create and return the new HTTP client potentially configured with a proxy and TLS.
client := http.Client{
Transport: &http.Transport{
Proxy: nil,
DialContext: dial,
TLSClientConfig: tlsConfig,
TLSHandshakeTimeout: 0,
DisableKeepAlives: false,
DisableCompression: false,
MaxIdleConns: 0,
MaxIdleConnsPerHost: 0,
MaxConnsPerHost: 0,
IdleConnTimeout: 0,
ResponseHeaderTimeout: 0,
ExpectContinueTimeout: 0,
TLSNextProto: nil,
ProxyConnectHeader: nil,
MaxResponseHeaderBytes: 0,
WriteBufferSize: 0,
ReadBufferSize: 0,
ForceAttemptHTTP2: false,
},
}
return &client, cancel, nil
}
// sendPostRequest sends the marshalled JSON-RPC command using HTTP-POST mode to the server described in the passed
// config struct. It also attempts to unmarshal the response as a JSON-RPC response and returns either the result field
// or the error field depending on whether or not there is an error.
func sendPostRequest(
marshalledJSON []byte, cx *config.Config, wallet bool,
) ([]byte, error) {
// Generate a request to the configured RPC server.
protocol := "http"
if cx.ClientTLS.True() {
protocol = "https"
}
serverAddr := cx.RPCConnect.V()
if wallet {
serverAddr = cx.WalletServer.V()
_, _ = fmt.Fprintln(os.Stderr, "using wallet server", serverAddr)
}
url := protocol + "://" + serverAddr
bodyReader := bytes.NewReader(marshalledJSON)
httpRequest, e := http.NewRequest("POST", url, bodyReader)
if e != nil {
return nil, e
}
httpRequest.Close = true
httpRequest.Header.Set("Content-Type", "application/json")
// Configure basic access authorization.
httpRequest.SetBasicAuth(cx.Username.V(), cx.Password.V())
// T.Ln(cx.Username.V(), cx.Password.V())
// Create the new HTTP client that is configured according to the user-specified options and submit the request.
var httpClient *http.Client
var cancel func()
httpClient, cancel, e = newHTTPClient(cx)
if e != nil {
return nil, e
}
httpResponse, e := httpClient.Do(httpRequest)
if e != nil {
cancel()
return nil, e
}
// close connection
cancel()
// Read the raw bytes and close the response.
respBytes, e := ioutil.ReadAll(httpResponse.Body)
if cErr := httpResponse.Body.Close(); E.Chk(cErr) {
}
if E.Chk(e) {
e = fmt.Errorf("error reading json reply: %v", e)
return nil, e
}
// Handle unsuccessful HTTP responses
if httpResponse.StatusCode < 200 || httpResponse.StatusCode >= 300 {
// Generate a standard error to return if the server body is empty. This should not happen very often, but it's
// better than showing nothing in case the target server has a poor implementation.
if len(respBytes) == 0 {
return nil, fmt.Errorf("%d %s", httpResponse.StatusCode,
http.StatusText(httpResponse.StatusCode),
)
}
return nil, fmt.Errorf("%s", respBytes)
}
// Unmarshal the response.
var resp btcjson.Response
if e := json.Unmarshal(respBytes, &resp); E.Chk(e) {
return nil, e
}
if resp.Error != nil {
return nil, resp.Error
}
return resp.Result, nil
}
// ListCommands categorizes and lists all of the usable commands along with their one-line usage.
func ListCommands() (s string) {
const (
categoryChain uint8 = iota
categoryWallet
numCategories
)
// Get a list of registered commands and categorize and filter them.
cmdMethods := btcjson.RegisteredCmdMethods()
categorized := make([][]string, numCategories)
for _, method := range cmdMethods {
var e error
var flags btcjson.UsageFlag
if flags, e = btcjson.MethodUsageFlags(method); E.Chk(e) {
continue
}
// Skip the commands that aren't usable from this utility.
if flags&btcjson.UnusableFlags != 0 {
continue
}
var usage string
if usage, e = btcjson.MethodUsageText(method); E.Chk(e) {
continue
}
// Categorize the command based on the usage flags.
category := categoryChain
if flags&btcjson.UFWalletOnly != 0 {
category = categoryWallet
}
categorized[category] = append(categorized[category], usage)
}
// Display the commands according to their categories.
categoryTitles := make([]string, numCategories)
categoryTitles[categoryChain] = "Chain Server Commands:"
categoryTitles[categoryWallet] = "Wallet Server Commands (--wallet):"
for category := uint8(0); category < numCategories; category++ {
s += categoryTitles[category]
s += "\n"
for _, usage := range categorized[category] {
s += "\t" + usage + "\n"
}
s += "\n"
}
return
}
// HelpPrint is the uninitialized help print function
var HelpPrint = func() {
fmt.Println("help has not been overridden")
}
// CtlMain is the entry point for the pod.Ctl component
func CtlMain(cx *config.Config) {
args := cx.ExtraArgs
if len(args) < 1 {
fmt.Println(ListCommands())
os.Exit(1)
}
// Ensure the specified method identifies a valid registered command and is one of the usable types.
method := args[0]
var usageFlags btcjson.UsageFlag
var e error
if usageFlags, e = btcjson.MethodUsageFlags(method); E.Chk(e) {
_, _ = fmt.Fprintf(os.Stderr, "Unrecognized command '%s'\n", method)
HelpPrint()
os.Exit(1)
}
if usageFlags&btcjson.UnusableFlags != 0 {
_, _ = fmt.Fprintf(os.Stderr, "The '%s' command can only be used via websockets\n", method)
HelpPrint()
os.Exit(1)
}
// Convert the remaining command line args to a slice of interface values to be passed along as parameters to the
// new command creation function. Since some commands, such as submitblock, can involve data which is too large for
// the operating system to allow as a normal command line parameter, support using '-' as an argument to allow the
// argument to be read from a stdin pipe.
bio := bufio.NewReader(os.Stdin)
params := make([]interface{}, 0, len(args[1:]))
for _, arg := range args[1:] {
if arg == "-" {
var param string
if param, e = bio.ReadString('\n'); E.Chk(e) && e != io.EOF {
_, _ = fmt.Fprintf(os.Stderr, "Failed to read data from stdin: %v\n", e)
os.Exit(1)
}
if e == io.EOF && len(param) == 0 {
_, _ = fmt.Fprintln(os.Stderr, "Not enough lines provided on stdin")
os.Exit(1)
}
param = strings.TrimRight(param, "\r\n")
params = append(params, param)
continue
}
params = append(params, arg)
}
var result []byte
if result, e = Call(cx, cx.UseWallet.True(), method, params...); E.Chk(e) {
os.Exit(1)
}
// // Attempt to create the appropriate command using the arguments provided by the user.
// cmd, e := btcjson.NewCmd(method, params...)
// if e != nil {
// E.Ln(e)
// // Show the error along with its error code when it's a json. BTCJSONError as it realistically will always be
// // since the NewCmd function is only supposed to return errors of that type.
// if jerr, ok := err.(btcjson.BTCJSONError); ok {
// fmt.Fprintf(os.Stderr, "%s command: %v (code: %s)\n", method, e, jerr.ErrorCode)
// CommandUsage(method)
// os.Exit(1)
// }
// // The error is not a json.BTCJSONError and this really should not happen. Nevertheless fall back to just
// // showing the error if it should happen due to a bug in the package.
// fmt.Fprintf(os.Stderr, "%s command: %v\n", method, e)
// CommandUsage(method)
// os.Exit(1)
// }
// // Marshal the command into a JSON-RPC byte slice in preparation for sending it to the RPC server.
// marshalledJSON, e := btcjson.MarshalCmd(1, cmd)
// if e != nil {
// E.Ln(e)
// fmt.Println(e)
// os.Exit(1)
// }
// // Send the JSON-RPC request to the server using the user-specified connection configuration.
// result, e := sendPostRequest(marshalledJSON, cx)
// if e != nil {
// E.Ln(e)
// os.Exit(1)
// }
// Choose how to display the result based on its type.
strResult := string(result)
switch {
case strings.HasPrefix(strResult, "{") || strings.HasPrefix(strResult, "["):
var dst bytes.Buffer
if e = json.Indent(&dst, result, "", " "); E.Chk(e) {
fmt.Printf("Failed to format result: %v", e)
os.Exit(1)
}
fmt.Println(dst.String())
case strings.HasPrefix(strResult, `"`):
var str string
if e = json.Unmarshal(result, &str); E.Chk(e) {
_, _ = fmt.Fprintf(os.Stderr, "Failed to unmarshal result: %v", e)
os.Exit(1)
}
fmt.Println(str)
case strResult != "null":
fmt.Println(strResult)
}
}
// CommandUsage displays the usage for a specific command.
func CommandUsage(method string) {
var usage string
var e error
if usage, e = btcjson.MethodUsageText(method); E.Chk(e) {
// This should never happen since the method was already checked before calling this function, but be safe.
fmt.Println("Failed to obtain command usage:", e)
return
}
fmt.Println("Usage:")
fmt.Printf(" %s\n", usage)
}

11
cmd/ctl/log.go Normal file

@@ -0,0 +1,11 @@
package ctl
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)

811
cmd/gui/app.go Normal file

@@ -0,0 +1,811 @@
package gui
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
uberatomic "go.uber.org/atomic"
"golang.org/x/exp/shiny/materialdesign/icons"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/cmd/gui/cfg"
"github.com/p9c/p9/pkg/p9icons"
)
func (wg *WalletGUI) GetAppWidget() (a *gel.App) {
a = wg.App(wg.Window.Width, uberatomic.NewString("home"), Break1).
SetMainDirection(l.W).
SetLogo(&p9icons.ParallelCoin).
SetAppTitleText("Parallelcoin Wallet")
wg.MainApp = a
wg.config = cfg.New(wg.Window, wg.quit)
wg.configs = wg.config.Config()
a.Pages(
map[string]l.Widget{
"home": wg.Page(
"home", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{Widget: wg.OverviewPage()},
},
),
"send": wg.Page(
"send", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{Widget: wg.SendPage.Fn},
},
),
"receive": wg.Page(
"receive", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{Widget: wg.ReceivePage.Fn},
},
),
"history": wg.Page(
"history", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{Widget: wg.HistoryPage()},
},
),
"settings": wg.Page(
"settings", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{
Widget: func(gtx l.Context) l.Dimensions {
return wg.configs.Widget(wg.config)(gtx)
},
},
},
),
"console": wg.Page(
"console", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{Widget: wg.console.Fn},
},
),
"help": wg.Page(
"help", gel.Widgets{
gel.WidgetSize{Widget: wg.HelpPage()},
},
),
"log": wg.Page(
"log", gel.Widgets{
gel.WidgetSize{Widget: a.Placeholder("log")},
},
),
"quit": wg.Page(
"quit", gel.Widgets{
gel.WidgetSize{
Widget: func(gtx l.Context) l.Dimensions {
return wg.VFlex().
SpaceEvenly().
AlignMiddle().
Rigid(
wg.H4("are you sure?").Color(wg.MainApp.BodyColorGet()).Alignment(text.Middle).Fn,
).
Rigid(
wg.Flex().
// SpaceEvenly().
Flexed(0.5, gel.EmptyMaxWidth()).
Rigid(
wg.Button(
wg.clickables["quit"].SetClick(
func() {
// interrupt.Request()
wg.gracefulShutdown()
// close(wg.quit)
},
),
).Color("Light").TextScale(5).Text(
"yes!!!",
).Fn,
).
Flexed(0.5, gel.EmptyMaxWidth()).
Fn,
).
Fn(gtx)
},
},
},
),
// "goroutines": wg.Page(
// "log", p9.Widgets{
// // p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
//
// p9.WidgetSize{
// Widget: func(gtx l.Context) l.Dimensions {
// le := func(gtx l.Context, index int) l.Dimensions {
// return wg.State.goroutines[index](gtx)
// }
// return func(gtx l.Context) l.Dimensions {
// return wg.ButtonInset(
// 0.25,
// wg.Fill(
// "DocBg",
// wg.lists["recent"].
// Vertical().
// // Background("DocBg").Color("DocText").Active("Primary").
// Length(len(wg.State.goroutines)).
// ListElement(le).
// Fn,
// ).Fn,
// ).
// Fn(gtx)
// }(gtx)
// // wg.NodeRunCommandChan <- "stop"
// // consume.Kill(wg.Worker)
// // consume.Kill(wg.cx.StateCfg.Miner)
// // close(wg.cx.NodeKill)
// // close(wg.cx.KillAll)
// // time.Sleep(time.Second*3)
// // interrupt.Request()
// // os.Exit(0)
// // return l.Dimensions{}
// },
// },
// },
// ),
"mining": wg.Page(
"mining", gel.Widgets{
gel.WidgetSize{
Widget: a.Placeholder("mining"),
},
},
),
"explorer": wg.Page(
"explorer", gel.Widgets{
gel.WidgetSize{
Widget: a.Placeholder("explorer"),
},
},
),
},
)
a.SideBar(
[]l.Widget{
// wg.SideBarButton(" ", " ", 11),
wg.SideBarButton("home", "home", 0),
wg.SideBarButton("send", "send", 1),
wg.SideBarButton("receive", "receive", 2),
wg.SideBarButton("history", "history", 3),
// wg.SideBarButton("explorer", "explorer", 6),
// wg.SideBarButton("mining", "mining", 7),
wg.SideBarButton("console", "console", 9),
wg.SideBarButton("settings", "settings", 5),
// wg.SideBarButton("log", "log", 10),
wg.SideBarButton("help", "help", 8),
// wg.SideBarButton(" ", " ", 11),
// wg.SideBarButton("quit", "quit", 11),
},
)
a.ButtonBar(
[]l.Widget{
// gel.EmptyMaxWidth(),
// wg.PageTopBarButton(
// "goroutines", 0, &icons.ActionBugReport, func(name string) {
// wg.App.ActivePage(name)
// }, a, "",
// ),
wg.PageTopBarButton(
"help", 1, &icons.ActionHelp, func(name string) {
wg.MainApp.ActivePage(name)
}, a, "",
),
wg.PageTopBarButton(
"home", 4, &icons.ActionLockOpen, func(name string) {
wg.unlockPassword.Wipe()
wg.unlockPassword.Focus()
wg.WalletWatcher.Q()
// if wg.WalletClient != nil {
// wg.WalletClient.Disconnect()
// wg.WalletClient = nil
// }
// wg.wallet.Stop()
// wg.node.Stop()
wg.State.SetActivePage("home")
wg.unlockPage.ActivePage("home")
wg.stateLoaded.Store(false)
wg.ready.Store(false)
}, a, "green",
),
// wg.Flex().Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).Fn,
// wg.PageTopBarButton(
// "quit", 3, &icons.ActionExitToApp, func(name string) {
// wg.MainApp.ActivePage(name)
// }, a, "",
// ),
},
)
a.StatusBar(
[]l.Widget{
// func(gtx l.Context) l.Dimensions { return wg.RunStatusPanel(gtx) },
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
wg.RunStatusPanel,
},
[]l.Widget{
// gel.EmptyMaxWidth(),
wg.StatusBarButton(
"console", 3, &p9icons.Terminal, func(name string) {
wg.MainApp.ActivePage(name)
}, a,
),
wg.StatusBarButton(
"log", 4, &icons.ActionList, func(name string) {
D.Ln("click on button", name)
if wg.MainApp.MenuOpen {
wg.MainApp.MenuOpen = false
}
wg.MainApp.ActivePage(name)
}, a,
),
wg.StatusBarButton(
"settings", 5, &icons.ActionSettings, func(name string) {
D.Ln("click on button", name)
if wg.MainApp.MenuOpen {
wg.MainApp.MenuOpen = false
}
wg.MainApp.ActivePage(name)
}, a,
),
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
},
)
// a.PushOverlay(wg.toasts.DrawToasts())
// a.PushOverlay(wg.dialog.DrawDialog())
return
}
func (wg *WalletGUI) Page(title string, widget gel.Widgets) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
return wg.VFlex().
// SpaceEvenly().
Rigid(
wg.Responsive(
wg.Size.Load(), gel.Widgets{
// p9.WidgetSize{
// Widget: a.ButtonInset(0.25, a.H5(title).Color(wg.App.BodyColorGet()).Fn).Fn,
// },
gel.WidgetSize{
// Size: 800,
Widget: gel.EmptySpace(0, 0),
// a.ButtonInset(0.25, a.Caption(title).Color(wg.BodyColorGet()).Fn).Fn,
},
},
).Fn,
).
Flexed(
1,
wg.Inset(
0.25,
wg.Responsive(wg.Size.Load(), widget).Fn,
).Fn,
).Fn(gtx)
}
}
func (wg *WalletGUI) SideBarButton(title, page string, index int) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
var scale float32
scale = gel.Scales["H6"]
var color string
background := "Transparent"
color = "DocText"
var ins float32 = 0.5
// var hl = false
if wg.MainApp.ActivePageGet() == page || wg.MainApp.PreRendering {
background = "PanelBg"
scale = gel.Scales["H6"]
color = "DocText"
// ins = 0.5
// hl = true
}
if title == " " {
scale = gel.Scales["H6"] / 2
}
max := int(wg.MainApp.SideBarSize.V)
if max > 0 {
gtx.Constraints.Max.X = max
gtx.Constraints.Min.X = max
}
// D.Ln("sideMAXXXXXX!!", max)
return wg.Direction().E().Embed(
wg.ButtonLayout(wg.sidebarButtons[index]).
CornerRadius(scale).Corners(0).
Background(background).
Embed(
wg.Inset(
ins,
func(gtx l.Context) l.Dimensions {
return wg.H5(title).
Color(color).
Alignment(text.End).
Fn(gtx)
},
).Fn,
).
SetClick(
func() {
if wg.MainApp.MenuOpen {
wg.MainApp.MenuOpen = false
}
wg.MainApp.ActivePage(page)
},
).
Fn,
).
Fn(gtx)
}
}
func (wg *WalletGUI) PageTopBarButton(
name string, index int, ico *[]byte, onClick func(string), app *gel.App,
highlightColor string,
) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
background := "Transparent"
// background := app.TitleBarBackgroundGet()
color := app.MenuColorGet()
if app.ActivePageGet() == name {
color = "PanelText"
// background = "scrim"
background = "PanelBg"
}
// if name == "home" {
// background = "scrim"
// }
if highlightColor != "" {
color = highlightColor
}
ic := wg.Icon().
Scale(gel.Scales["H5"]).
Color(color).
Src(ico).
Fn
return wg.Flex().Rigid(
// wg.ButtonInset(0.25,
wg.ButtonLayout(wg.buttonBarButtons[index]).
CornerRadius(0).
Embed(
wg.Inset(
0.375,
ic,
).Fn,
).
Background(background).
SetClick(func() { onClick(name) }).
Fn,
// ).Fn,
).Fn(gtx)
}
}
func (wg *WalletGUI) StatusBarButton(
name string,
index int,
ico *[]byte,
onClick func(string),
app *gel.App,
) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
background := app.StatusBarBackgroundGet()
color := app.StatusBarColorGet()
if app.ActivePageGet() == name {
// background, color = color, background
background = "PanelBg"
// color = "Danger"
}
ic := wg.Icon().
Scale(gel.Scales["H5"]).
Color(color).
Src(ico).
Fn
return wg.Flex().
Rigid(
wg.ButtonLayout(wg.statusBarButtons[index]).
CornerRadius(0).
Embed(
wg.Inset(0.25, ic).Fn,
).
Background(background).
SetClick(func() { onClick(name) }).
Fn,
).Fn(gtx)
}
}
func (wg *WalletGUI) SetNodeRunState(b bool) {
go func() {
D.Ln("node run state is now", b)
if b {
wg.node.Start()
} else {
wg.node.Stop()
}
}()
}
func (wg *WalletGUI) SetWalletRunState(b bool) {
go func() {
D.Ln("wallet run state is now", b)
if b {
wg.wallet.Start()
} else {
wg.wallet.Stop()
}
}()
}
func (wg *WalletGUI) RunStatusPanel(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
t, f := &p9icons.Link, &p9icons.LinkOff
var runningIcon *[]byte
if wg.node.Running() {
runningIcon = t
} else {
runningIcon = f
}
miningIcon := &p9icons.Mine
if !wg.miner.Running() {
miningIcon = &p9icons.NoMine
}
controllerIcon := &icons.NotificationSyncDisabled
if wg.cx.Config.Controller.True() {
controllerIcon = &icons.NotificationSync
}
discoverColor := "DocText"
discoverIcon := &icons.DeviceWiFiTethering
if wg.cx.Config.Discovery.False() {
discoverIcon = &icons.CommunicationPortableWiFiOff
discoverColor = "scrim"
}
clr := "scrim"
if wg.cx.Config.Controller.True() {
clr = "DocText"
}
clr2 := "DocText"
if wg.cx.Config.GenThreads.V() == 0 {
clr2 = "scrim"
}
// background := wg.App.StatusBarBackgroundGet()
color := wg.MainApp.StatusBarColorGet()
ic := wg.Icon().
Scale(gel.Scales["H5"]).
Color(color).
Src(&icons.NavigationRefresh).
Fn
return wg.Flex().AlignMiddle().
Rigid(
wg.ButtonLayout(wg.statusBarButtons[0]).
CornerRadius(0).
Embed(
wg.Inset(
0.25,
wg.Icon().
Scale(gel.Scales["H5"]).
Color("DocText").
Src(runningIcon).
Fn,
).Fn,
).
Background(wg.MainApp.StatusBarBackgroundGet()).
SetClick(
func() {
go func() {
D.Ln("clicked node run control button", wg.node.Running())
// wg.toggleNode()
wg.unlockPassword.Wipe()
wg.unlockPassword.Focus()
if wg.node.Running() {
if wg.wallet.Running() {
wg.WalletWatcher.Q()
}
wg.node.Stop()
wg.ready.Store(false)
wg.stateLoaded.Store(false)
wg.State.SetActivePage("home")
} else {
wg.node.Start()
// wg.ready.Store(true)
// wg.stateLoaded.Store(true)
}
}()
},
).
Fn,
).
Rigid(
wg.Inset(
0.33,
wg.Body1(fmt.Sprintf("%d", wg.State.bestBlockHeight.Load())).
Font("go regular").TextScale(gel.Scales["Caption"]).
Color("DocText").
Fn,
).Fn,
).
Rigid(
wg.ButtonLayout(wg.statusBarButtons[6]).
CornerRadius(0).
Embed(
wg.Inset(
0.25,
wg.Icon().
Scale(gel.Scales["H5"]).
Color(discoverColor).
Src(discoverIcon).
Fn,
).Fn,
).
Background(wg.MainApp.StatusBarBackgroundGet()).
SetClick(
func() {
go func() {
wg.cx.Config.Discovery.Flip()
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
I.Ln("discover enabled:",
wg.cx.Config.Discovery.True())
}()
},
).
Fn,
).
Rigid(
wg.Inset(
0.33,
wg.Caption(fmt.Sprintf("%d LAN %d", len(wg.otherNodes), wg.peerCount.Load())).
Font("go regular").
Color("DocText").
Fn,
).Fn,
).
Rigid(
wg.ButtonLayout(wg.statusBarButtons[7]).
CornerRadius(0).
Embed(
wg.
Inset(
0.25, wg.
Icon().
Scale(gel.Scales["H5"]).
Color(clr).
Src(controllerIcon).Fn,
).Fn,
).
Background(wg.MainApp.StatusBarBackgroundGet()).
SetClick(
func() {
if wg.ChainClient != nil && !wg.ChainClient.Disconnected() {
wg.cx.Config.Controller.Flip()
I.Ln("controller running:",
wg.cx.Config.Controller.True())
var e error
if e = wg.ChainClient.SetGenerate(
wg.cx.Config.Controller.True(),
wg.cx.Config.GenThreads.V(),
); !E.Chk(e) {
}
}
// // wg.toggleMiner()
// go func() {
// if wg.miner.Running() {
// *wg.cx.Config.Generate = false
// wg.miner.Stop()
// } else {
// wg.miner.Start()
// *wg.cx.Config.Generate = true
// }
// save.Save(wg.cx.Config)
// }()
},
).
Fn,
).
Rigid(
wg.ButtonLayout(wg.statusBarButtons[1]).
CornerRadius(0).
Embed(
wg.Inset(
0.25, wg.
Icon().
Scale(gel.Scales["H5"]).
Color(clr2).
Src(miningIcon).Fn,
).Fn,
).
Background(wg.MainApp.StatusBarBackgroundGet()).
SetClick(
func() {
// wg.toggleMiner()
go func() {
if wg.cx.Config.GenThreads.V() != 0 {
if wg.miner.Running() {
wg.cx.Config.Generate.F()
wg.miner.Stop()
} else {
wg.miner.Start()
wg.cx.Config.Generate.T()
}
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
}
}()
},
).
Fn,
).
Rigid(
func(gtx l.Context) l.Dimensions {
return wg.incdecs["generatethreads"].
// Color("DocText").
// Background(wg.MainApp.StatusBarBackgroundGet()).
Fn(gtx)
},
).
Rigid(
func(gtx l.Context) l.Dimensions {
if !wg.wallet.Running() {
return l.Dimensions{}
}
return wg.Flex().
Rigid(
wg.ButtonLayout(wg.statusBarButtons[2]).
CornerRadius(0).
Embed(
wg.Inset(0.25, ic).Fn,
).
Background(wg.MainApp.StatusBarBackgroundGet()).
SetClick(
func() {
D.Ln("clicked reset wallet button")
go func() {
var e error
wasRunning := wg.wallet.Running()
D.Ln("was running", wasRunning)
if wasRunning {
wg.wallet.Stop()
}
args := []string{
os.Args[0],
"DD"+
wg.cx.Config.DataDir.V(),
"pipelog",
"walletpass"+
wg.cx.Config.WalletPass.V(),
"wallet",
"drophistory",
}
runner := exec.Command(args[0], args[1:]...)
runner.Stderr = os.Stderr
runner.Stdout = os.Stderr
if e = wg.writeWalletCookie(); E.Chk(e) {
}
if e = runner.Run(); E.Chk(e) {
}
if wasRunning {
wg.wallet.Start()
}
}()
},
).
Fn,
).Fn(gtx)
},
).
Fn(gtx)
}(gtx)
}
func (wg *WalletGUI) writeWalletCookie() (e error) {
// for security, when other apps launch the wallet the public password can be passed via a file that is deleted after it is read
walletPassPath := filepath.Join(wg.cx.Config.DataDir.V(), wg.cx.ActiveNet.Name, "wp.txt")
D.Ln("runner", walletPassPath)
b := wg.cx.Config.WalletPass.Bytes()
if e = ioutil.WriteFile(walletPassPath, b, 0700); E.Chk(e) {
}
D.Ln("created password cookie")
return
}
//
// func (wg *WalletGUI) toggleNode() {
// if wg.node.Running() {
// wg.node.Stop()
// *wg.cx.Config.NodeOff = true
// } else {
// wg.node.Start()
// *wg.cx.Config.NodeOff = false
// }
// save.Save(wg.cx.Config)
// }
//
// func (wg *WalletGUI) startNode() {
// if !wg.node.Running() {
// wg.node.Start()
// }
// D.Ln("startNode")
// }
//
// func (wg *WalletGUI) stopNode() {
// if wg.wallet.Running() {
// wg.stopWallet()
// wg.unlockPassword.Wipe()
// // wg.walletLocked.Store(true)
// }
// if wg.node.Running() {
// wg.node.Stop()
// }
// D.Ln("stopNode")
// }
//
// func (wg *WalletGUI) toggleMiner() {
// if wg.miner.Running() {
// wg.miner.Stop()
// *wg.cx.Config.Generate = false
// }
// if !wg.miner.Running() && *wg.cx.Config.GenThreads > 0 {
// wg.miner.Start()
// *wg.cx.Config.Generate = true
// }
// save.Save(wg.cx.Config)
// }
//
// func (wg *WalletGUI) startMiner() {
// if *wg.cx.Config.GenThreads == 0 && wg.miner.Running() {
// wg.stopMiner()
// D.Ln("was zero threads")
// } else {
// wg.miner.Start()
// D.Ln("startMiner")
// }
// }
//
// func (wg *WalletGUI) stopMiner() {
// if wg.miner.Running() {
// wg.miner.Stop()
// }
// D.Ln("stopMiner")
// }
//
// func (wg *WalletGUI) toggleWallet() {
// if wg.wallet.Running() {
// wg.stopWallet()
// *wg.cx.Config.WalletOff = true
// } else {
// wg.startWallet()
// *wg.cx.Config.WalletOff = false
// }
// save.Save(wg.cx.Config)
// }
//
// func (wg *WalletGUI) startWallet() {
// if !wg.node.Running() {
// wg.startNode()
// }
// if !wg.wallet.Running() {
// wg.wallet.Start()
// wg.unlockPassword.Wipe()
// // wg.walletLocked.Store(false)
// }
// D.Ln("startWallet")
// }
//
// func (wg *WalletGUI) stopWallet() {
// if wg.wallet.Running() {
// wg.wallet.Stop()
// // wg.unlockPassword.Wipe()
// // wg.walletLocked.Store(true)
// }
// wg.unlockPassword.Wipe()
// D.Ln("stopWallet")
// }

cmd/gui/cfg/config.go
package cfg
import (
"sort"
"golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/gel/gio/text"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel"
)
type Item struct {
slug string
typ string
label string
description string
widget string
dataType string
options []string
Slot interface{}
}
func (it *Item) Item(ng *Config) l.Widget {
return func(gtx l.Context) l.Dimensions {
return ng.Theme.VFlex().Rigid(
ng.H6(it.label).Fn,
).Fn(gtx)
}
}
type ItemMap map[string]*Item
type GroupsMap map[string]ItemMap
type ListItem struct {
name string
widget func() []l.Widget
}
type ListItems []ListItem
func (l ListItems) Len() int {
return len(l)
}
func (l ListItems) Less(i, j int) bool {
return l[i].name < l[j].name
}
func (l ListItems) Swap(i, j int) {
l[i], l[j] = l[j], l[i]
}
type List struct {
name string
items ListItems
}
type Lists []List
func (l Lists) Len() int {
return len(l)
}
func (l Lists) Less(i, j int) bool {
return l[i].name < l[j].name
}
func (l Lists) Swap(i, j int) {
l[i], l[j] = l[j], l[i]
}
func (c *Config) Config() GroupsMap {
// schema := podcfg.GetConfigSchema(c.cx.Config)
tabNames := make(GroupsMap)
// // tabs := make(p9.WidgetMap)
// for i := range schema.Groups {
// for j := range schema.Groups[i].Fields {
// sgf := schema.Groups[i].Fields[j]
// if _, ok := tabNames[sgf.Group]; !ok {
// tabNames[sgf.Group] = make(ItemMap)
// }
// tabNames[sgf.Group][sgf.Slug] = &Item{
// slug: sgf.Slug,
// typ: sgf.Type,
// label: sgf.Label,
// description: sgf.Title,
// widget: sgf.Widget,
// dataType: sgf.Datatype,
// options: sgf.Options,
// Slot: c.cx.ConfigMap[sgf.Slug],
// }
// // D.S(sgf)
// // create all the necessary widgets required before display
// tgs := tabNames[sgf.Group][sgf.Slug]
// switch sgf.Widget {
// case "toggle":
// c.Bools[sgf.Slug] = c.Bool(*tgs.Slot.(*bool)).SetOnChange(
// func(b bool) {
// D.Ln(sgf.Slug, "submitted", b)
// bb := c.cx.ConfigMap[sgf.Slug].(*bool)
// *bb = b
// podcfg.Save(c.cx.Config)
// if sgf.Slug == "DarkTheme" {
// c.Theme.Colors.SetTheme(b)
// }
// },
// )
// case "integer":
// c.inputs[sgf.Slug] = c.Input(
// fmt.Sprint(*tgs.Slot.(*int)), sgf.Slug, "DocText", "DocBg", "PanelBg", func(txt string) {
// D.Ln(sgf.Slug, "submitted", txt)
// i := c.cx.ConfigMap[sgf.Slug].(*int)
// if n, e := strconv.Atoi(txt); !E.Chk(e) {
// *i = n
// }
// podcfg.Save(c.cx.Config)
// }, nil,
// )
// case "time":
// c.inputs[sgf.Slug] = c.Input(
// fmt.Sprint(
// *tgs.Slot.(*time.
// Duration),
// ), sgf.Slug, "DocText", "DocBg", "PanelBg", func(txt string) {
// D.Ln(sgf.Slug, "submitted", txt)
// tt := c.cx.ConfigMap[sgf.Slug].(*time.Duration)
// if d, e := time.ParseDuration(txt); !E.Chk(e) {
// *tt = d
// }
// podcfg.Save(c.cx.Config)
// }, nil,
// )
// case "float":
// c.inputs[sgf.Slug] = c.Input(
// strconv.FormatFloat(
// *tgs.Slot.(
// *float64), 'f', -1, 64,
// ), sgf.Slug, "DocText", "DocBg", "PanelBg", func(txt string) {
// D.Ln(sgf.Slug, "submitted", txt)
// ff := c.cx.ConfigMap[sgf.Slug].(*float64)
// if f, e := strconv.ParseFloat(txt, 64); !E.Chk(e) {
// *ff = f
// }
// podcfg.Save(c.cx.Config)
// }, nil,
// )
// case "string":
// c.inputs[sgf.Slug] = c.Input(
// *tgs.Slot.(*string), sgf.Slug, "DocText", "DocBg", "PanelBg", func(txt string) {
// D.Ln(sgf.Slug, "submitted", txt)
// ss := c.cx.ConfigMap[sgf.Slug].(*string)
// *ss = txt
// podcfg.Save(c.cx.Config)
// }, nil,
// )
// case "password":
// c.passwords[sgf.Slug] = c.Password(
// "password",
// tgs.Slot.(*string), "DocText", "DocBg", "PanelBg",
// func(txt string) {
// D.Ln(sgf.Slug, "submitted", txt)
// pp := c.cx.ConfigMap[sgf.Slug].(*string)
// *pp = txt
// podcfg.Save(c.cx.Config)
// },
// )
// case "multi":
// c.multis[sgf.Slug] = c.Multiline(
// tgs.Slot.(*cli.StringSlice), "DocText", "DocBg", "PanelBg", 30, func(txt []string) {
// D.Ln(sgf.Slug, "submitted", txt)
// sss := c.cx.ConfigMap[sgf.Slug].(*cli.StringSlice)
// *sss = txt
// podcfg.Save(c.cx.Config)
// },
// )
// // c.multis[sgf.Slug]
// case "radio":
// c.checkables[sgf.Slug] = c.Checkable()
// for i := range sgf.Options {
// c.checkables[sgf.Slug+sgf.Options[i]] = c.Checkable()
// }
// txt := *tabNames[sgf.Group][sgf.Slug].Slot.(*string)
// c.enums[sgf.Slug] = c.Enum().SetValue(txt).SetOnChange(
// func(value string) {
// rr := c.cx.ConfigMap[sgf.Slug].(*string)
// *rr = value
// podcfg.Save(c.cx.Config)
// },
// )
// c.lists[sgf.Slug] = c.List()
// }
// }
// }
// D.S(tabNames)
return tabNames // .Widget(c)
// return func(gtx l.Context) l.Dimensions {
// return l.Dimensions{}
// }
}
func (gm GroupsMap) Widget(ng *Config) l.Widget {
// _, file, line, _ := runtime.Caller(2)
// D.F("%s:%d", file, line)
var groups Lists
for i := range gm {
var li ListItems
gmi := gm[i]
for j := range gmi {
gmij := gmi[j]
li = append(
li, ListItem{
name: j,
widget: func() []l.Widget {
return ng.RenderConfigItem(gmij, len(li))
},
// },
},
)
}
sort.Sort(li)
groups = append(groups, List{name: i, items: li})
}
sort.Sort(groups)
var out []l.Widget
first := true
for i := range groups {
// D.Ln(groups[i].name)
g := groups[i]
if !first {
// put a space between the sections
out = append(
out, func(gtx l.Context) l.Dimensions {
dims := ng.VFlex().
// Rigid(
// // ng.Inset(0.25,
// ng.Fill("DocBg", l.Center, ng.TextSize.True, l.S, ng.Inset(0.25,
// gel.EmptyMaxWidth()).Fn,
// ).Fn,
// // ).Fn,
// ).
Rigid(ng.Inset(0.25, gel.EmptyMaxWidth()).Fn).
// Rigid(
// // ng.Inset(0.25,
// ng.Fill("DocBg", l.Center, ng.TextSize.True, l.N, ng.Inset(0.25,
// gel.EmptyMaxWidth()).Fn,
// ).Fn,
// // ).Fn,
// ).
Fn(gtx)
// ng.Fill("PanelBg", gel.EmptySpace(gtx.Constraints.Max.X, gtx.Constraints.Max.Y), l.Center, 0).Fn(gtx)
return dims
},
)
// out = append(out, func(gtx l.Context) l.Dimensions {
// return ng.th.ButtonInset(0.25, p9.EmptySpace(0, 0)).Fn(gtx)
// })
} else {
first = false
}
// put in the header
out = append(
out,
ng.Fill(
"Primary", l.Center, ng.TextSize.V*2, 0, ng.Flex().Flexed(
1,
ng.Inset(
0.75,
ng.H3(g.name).
Color("DocText").
Alignment(text.Start).
Fn,
).Fn,
).Fn,
).Fn,
)
// out = append(out, func(gtx l.Context) l.Dimensions {
// return ng.th.Fill("PanelBg",
// ng.th.ButtonInset(0.25,
// ng.th.Flex().Flexed(1,
// p9.EmptyMaxWidth(),
// ).Fn,
// ).Fn,
// ).Fn(gtx)
// })
// add the widgets
for j := range groups[i].items {
gi := groups[i].items[j]
for x := range gi.widget() {
k := x
out = append(
out, func(gtx l.Context) l.Dimensions {
if k < len(gi.widget()) {
return ng.Fill(
"DocBg", l.Center, ng.TextSize.V, 0, ng.Flex().
// Rigid(
// ng.Inset(0.25, gel.EmptySpace(0, 0)).Fn,
// ).
Rigid(
ng.Inset(
0.25,
gi.widget()[k],
).Fn,
).Fn,
).Fn(gtx)
}
return l.Dimensions{}
},
)
}
}
}
le := func(gtx l.Context, index int) l.Dimensions {
return out[index](gtx)
}
return func(gtx l.Context) l.Dimensions {
// clip.UniformRRect(f32.Rectangle{
// Max: f32.Pt(float32(gtx.Constraints.Max.X), float32(gtx.Constraints.Max.Y)),
// }, ng.TextSize.True/2).Add(gtx.Ops)
return ng.Fill(
"DocBg", l.Center, ng.TextSize.V, 0, ng.Inset(
0.25,
ng.lists["settings"].
Vertical().
Length(len(out)).
// Background("PanelBg").
// Color("DocBg").
// Active("Primary").
ListElement(le).
Fn,
).Fn,
).Fn(gtx)
}
}
// RenderConfigItem renders a config item. The position parameter gives the index at which the item begins in
// the larger config widget list; with this and its current data set, the multi widget can insert and delete
// elements above its add button without re-rendering the config item or, worse, the whole config widget.
func (c *Config) RenderConfigItem(item *Item, position int) []l.Widget {
switch item.widget {
case "toggle":
return c.RenderToggle(item)
case "integer":
return c.RenderInteger(item)
case "time":
return c.RenderTime(item)
case "float":
return c.RenderFloat(item)
case "string":
return c.RenderString(item)
case "password":
return c.RenderPassword(item)
case "multi":
return c.RenderMulti(item, position)
case "radio":
return c.RenderRadio(item)
}
D.Ln("fallthrough", item.widget)
return []l.Widget{func(l.Context) l.Dimensions { return l.Dimensions{} }}
}
func (c *Config) RenderToggle(item *Item) []l.Widget {
return []l.Widget{
func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.Flex().
Rigid(
c.Switch(c.Bools[item.slug]).DisabledColor("Light").Fn,
).
Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).Fn(gtx)
},
}
}
func (c *Config) RenderInteger(item *Item) []l.Widget {
return []l.Widget{
func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.inputs[item.slug].Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).Fn(gtx)
},
}
}
func (c *Config) RenderTime(item *Item) []l.Widget {
return []l.Widget{
func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.inputs[item.slug].Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).
Fn(gtx)
},
}
}
func (c *Config) RenderFloat(item *Item) []l.Widget {
return []l.Widget{
func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.inputs[item.slug].Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).
Fn(gtx)
},
}
}
func (c *Config) RenderString(item *Item) []l.Widget {
return []l.Widget{
c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.inputs[item.slug].Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).
Fn,
}
}
func (c *Config) RenderPassword(item *Item) []l.Widget {
return []l.Widget{
c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.passwords[item.slug].Fn,
).
Rigid(
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).
Fn,
}
}
func (c *Config) RenderMulti(item *Item, position int) []l.Widget {
// D.Ln("rendering multi")
// c.multis[item.slug].
w := []l.Widget{
func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.Flex().Flexed(
1,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.Caption(item.description).Fn,
).Fn,
).Fn,
).
Fn(gtx)
},
}
widgets := c.multis[item.slug].Widgets()
// D.Ln(widgets)
w = append(w, widgets...)
return w
}
func (c *Config) RenderRadio(item *Item) []l.Widget {
out := func(gtx l.Context) l.Dimensions {
var options []l.Widget
for i := range item.options {
color := "PanelBg"
if c.enums[item.slug].Value() == item.options[i] {
color = "Primary"
}
options = append(
options,
c.RadioButton(
c.checkables[item.slug+item.options[i]].
Color("DocText").
IconColor(color).
CheckedStateIcon(&icons.ToggleRadioButtonChecked).
UncheckedStateIcon(&icons.ToggleRadioButtonUnchecked),
c.enums[item.slug], item.options[i], item.options[i],
).Fn,
)
}
return c.Inset(
0.25,
c.VFlex().
Rigid(
c.Body1(item.label).Fn,
).
Rigid(
c.Flex().
Rigid(
func(gtx l.Context) l.Dimensions {
gtx.Constraints.Max.X = int(c.Theme.TextSize.Scale(10).V)
return c.lists[item.slug].DisableScroll(true).Slice(gtx, options...)(gtx)
// // return c.lists[item.slug].Length(len(options)).Vertical().ListElement(func(gtx l.Context, index int) l.Dimensions {
// // return options[index](gtx)
// // }).Fn(gtx)
// return c.lists[item.slug].Slice(gtx, options...)(gtx)
// // return l.Dimensions{}
},
).
Flexed(
1,
c.Caption(item.description).Fn,
).
Fn,
).Fn,
).
Fn(gtx)
}
return []l.Widget{out}
}

cmd/gui/cfg/log.go
package cfg
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// F.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

cmd/gui/cfg/main.go
package cfg
import (
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/gel"
)
func New(w *gel.Window, killAll qu.C) *Config {
cfg := &Config{
Window: w,
// cx: cx,
quit: killAll,
}
// cfg.Theme = cx.App
return cfg.Init()
}
type Config struct {
// cx *state.State
*gel.Window
Bools map[string]*gel.Bool
lists map[string]*gel.List
enums map[string]*gel.Enum
checkables map[string]*gel.Checkable
clickables map[string]*gel.Clickable
editors map[string]*gel.Editor
inputs map[string]*gel.Input
multis map[string]*gel.Multi
configs GroupsMap
passwords map[string]*gel.Password
quit qu.C
}
func (c *Config) Init() *Config {
c.Theme.SetDarkTheme(c.Theme.Dark.True())
c.enums = map[string]*gel.Enum{
// "runmode": ng.th.Enum().SetValue(ng.runMode),
}
c.Bools = map[string]*gel.Bool{
// "runstate": ng.th.Bool(false).SetOnChange(func(b bool) {
// D.Ln("run state is now", b)
// }),
}
c.lists = map[string]*gel.List{
// "overview": ng.th.List(),
"settings": c.List(),
}
c.clickables = map[string]*gel.Clickable{
// "quit": ng.th.Clickable(),
}
c.checkables = map[string]*gel.Checkable{
// "runmodenode": ng.th.Checkable(),
// "runmodewallet": ng.th.Checkable(),
// "runmodeshell": ng.th.Checkable(),
}
c.editors = make(map[string]*gel.Editor)
c.inputs = make(map[string]*gel.Input)
c.multis = make(map[string]*gel.Multi)
c.passwords = make(map[string]*gel.Password)
return c
}

cmd/gui/console.go
package gui
import (
"encoding/json"
"fmt"
"regexp"
"sort"
"strconv"
"strings"
"github.com/atotto/clipboard"
"golang.org/x/exp/shiny/materialdesign/icons"
l "github.com/p9c/p9/pkg/gel/gio/layout"
ctl2 "github.com/p9c/p9/cmd/ctl"
icons2 "golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/gel"
)
type Console struct {
*gel.Window
output []l.Widget
outputList *gel.List
editor *gel.Editor
clearClickable *gel.Clickable
clearButton *gel.IconButton
copyClickable *gel.Clickable
copyButton *gel.IconButton
pasteClickable *gel.Clickable
pasteButton *gel.IconButton
submitFunc func(txt string)
clickables []*gel.Clickable
}
var findSpaceRegexp = regexp.MustCompile(`\s+`)
func (wg *WalletGUI) ConsolePage() *Console {
D.Ln("running ConsolePage")
c := &Console{
Window: wg.Window,
editor: wg.Editor().SingleLine().Submit(true),
clearClickable: wg.Clickable(),
copyClickable: wg.Clickable(),
pasteClickable: wg.Clickable(),
outputList: wg.List().ScrollToEnd(),
}
c.submitFunc = func(txt string) {
go func() {
D.Ln("submit", txt)
c.output = append(
c.output,
func(gtx l.Context) l.Dimensions {
return wg.VFlex().
Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(
wg.Flex().
Flexed(
1,
wg.Body2(txt).Color("DocText").Font("bariol bold").Fn,
).
Fn,
).Fn(gtx)
},
)
c.editor.SetText("")
split := strings.Split(txt, " ")
method, args := split[0], split[1:]
var params []interface{}
var e error
var result []byte
var o string
var errString, prev string
for i := range args {
params = append(params, args[i])
}
if method == "clear" || method == "cls" {
// clear the list of display widgets
c.output = c.output[:0]
// free up the pool widgets used in the current output
for i := range c.clickables {
wg.WidgetPool.FreeClickable(c.clickables[i])
}
c.clickables = c.clickables[:0]
return
}
if method == "help" {
if len(args) == 0 {
D.Ln("rpc called help")
var result1, result2 []byte
if result1, e = ctl2.Call(wg.cx.Config, false, method, params...); E.Chk(e) {
}
r1 := string(result1)
if r1, e = strconv.Unquote(r1); E.Chk(e) {
}
o = r1 + "\n"
if result2, e = ctl2.Call(wg.cx.Config, true, method, params...); E.Chk(e) {
}
r2 := string(result2)
if r2, e = strconv.Unquote(r2); E.Chk(e) {
}
o += r2 + "\n"
splitted := strings.Split(o, "\n")
sort.Strings(splitted)
var dedup []string
for i := range splitted {
if i > 0 {
if splitted[i] != prev {
dedup = append(dedup, splitted[i])
}
}
prev = splitted[i]
}
o = strings.Join(dedup, "\n")
if errString != "" {
o += "BTCJSONError:\n"
o += errString
}
splitResult := strings.Split(o, "\n")
const maxPerWidget = 6
for i := 0; i < len(splitResult)-maxPerWidget; i += maxPerWidget {
sri := strings.Join(splitResult[i:i+maxPerWidget], "\n")
c.output = append(
c.output,
func(gtx l.Context) l.Dimensions {
return wg.Flex().
Flexed(
1,
wg.Caption(sri).
Color("DocText").
Font("bariol regular").
MaxLines(maxPerWidget).Fn,
).
Fn(gtx)
},
)
}
return
} else {
var out string
var isErr bool
if result, e = ctl2.Call(wg.cx.Config, false, method, params...); E.Chk(e) {
isErr = true
out = e.Error()
I.Ln(out)
// if out, e = strconv.Unquote(); E.Chk(e) {
// }
} else {
if out, e = strconv.Unquote(string(result)); E.Chk(e) {
}
}
out = strings.ReplaceAll(out, "\t", " ")
D.Ln(out)
splitResult := strings.Split(out, "\n")
outputColor := "DocText"
if isErr {
outputColor = "Danger"
}
for i := range splitResult {
sri := splitResult[i]
c.output = append(
c.output,
func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().AlignStart().
Rigid(
wg.Body2(sri).
Color(outputColor).
Font("go regular").MaxLines(4).
Fn,
).
Fn(gtx)
},
)
}
return
}
} else {
D.Ln("method", method, "args", args)
if result, e = ctl2.Call(wg.cx.Config, false, method, params...); E.Chk(e) {
var errR string
if result, e = ctl2.Call(wg.cx.Config, true, method, params...); E.Chk(e) {
if e != nil {
errR = e.Error()
}
c.output = append(
c.output, c.Theme.Flex().AlignStart().
Rigid(wg.Body2(errR).Color("Danger").Fn).Fn,
)
return
}
if e != nil {
errR = e.Error()
}
c.output = append(
c.output, c.Theme.Flex().AlignStart().
Rigid(
wg.Body2(errR).Color("Danger").Fn,
).Fn,
)
}
c.output = append(c.output, wg.console.JSONWidget("DocText", result)...)
}
c.outputList.JumpToEnd()
}()
}
clearClickableFn := func() {
c.editor.SetText("")
c.editor.Focus()
}
copyClickableFn := func() {
go func() {
if e := clipboard.WriteAll(c.editor.Text()); E.Chk(e) {
}
}()
c.editor.Focus()
}
pasteClickableFn := func() {
// // col := c.editor.Caret.Col
// go func() {
// txt := c.editor.Text()
// var e error
// var cb string
// if cb, e = clipboard.ReadAll(); E.Chk(e) {
// }
// cb = findSpaceRegexp.ReplaceAllString(cb, " ")
// txt = txt[:col] + cb + txt[col:]
// c.editor.SetText(txt)
// c.editor.Move(col + len(cb))
// }()
}
c.clearButton = wg.IconButton(c.clearClickable.SetClick(clearClickableFn)).
Icon(
wg.Icon().
Color("DocText").
Src(&icons2.ContentBackspace),
).
Background("").
ButtonInset(0.25)
c.copyButton = wg.IconButton(c.copyClickable.SetClick(copyClickableFn)).
Icon(
wg.Icon().
Color("DocText").
Src(&icons2.ContentContentCopy),
).
Background("").
ButtonInset(0.25)
c.pasteButton = wg.IconButton(c.pasteClickable.SetClick(pasteClickableFn)).
Icon(
wg.Icon().
Color("DocText").
Src(&icons2.ContentContentPaste),
).
Background("").
ButtonInset(0.25)
c.output = append(
c.output, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().AlignStart().Rigid(c.H6("Welcome to the Parallelcoin RPC console").Color("DocText").Fn).Fn(gtx)
}, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().AlignStart().Rigid(c.Caption("Type 'help' to get available commands and 'clear' or 'cls' to clear the screen").Color("DocText").Fn).Fn(gtx)
},
)
return c
}
func (c *Console) Fn(gtx l.Context) l.Dimensions {
le := func(gtx l.Context, index int) l.Dimensions {
if index >= len(c.output) || index < 0 {
return l.Dimensions{}
} else {
return c.output[index](gtx)
}
}
fn := c.Theme.VFlex().
Flexed(
0.1,
c.Fill(
"PanelBg", l.Center, c.TextSize.V, 0, func(gtx l.Context) l.Dimensions {
return c.Inset(
0.25,
c.outputList.
ScrollToEnd().
End().
Background("PanelBg").
Color("DocBg").
Active("Primary").
Vertical().
Length(len(c.output)).
ListElement(le).
Fn,
).
Fn(gtx)
},
).Fn,
).
Rigid(
c.Fill(
"DocBg", l.Center, c.TextSize.V, 0, c.Inset(
0.25,
c.Theme.Flex().
Flexed(
1,
c.TextInput(c.editor.SetSubmit(c.submitFunc), "enter an rpc command").
Color("DocText").
Fn,
).
Rigid(c.copyButton.Fn).
Rigid(c.pasteButton.Fn).
Rigid(c.clearButton.Fn).
Fn,
).Fn,
).Fn,
).
Fn
return fn(gtx)
}
type JSONElement struct {
key string
value interface{}
}
type JSONElements []JSONElement
func (je JSONElements) Len() int {
return len(je)
}
func (je JSONElements) Less(i, j int) bool {
return je[i].key < je[j].key
}
func (je JSONElements) Swap(i, j int) {
je[i], je[j] = je[j], je[i]
}
func GetJSONElements(in map[string]interface{}) (je JSONElements) {
for i := range in {
je = append(
je, JSONElement{
key: i,
value: in[i],
},
)
}
sort.Sort(je)
return
}
func (c *Console) getIndent(n int, size float32, widget l.Widget) (out l.Widget) {
o := c.Theme.Flex()
for i := 0; i < n; i++ {
o.Rigid(c.Inset(size/2, gel.EmptySpace(0, 0)).Fn)
}
o.Rigid(widget)
out = o.Fn
return
}
func (c *Console) JSONWidget(color string, j []byte) (out []l.Widget) {
var ifc interface{}
var e error
if e = json.Unmarshal(j, &ifc); E.Chk(e) {
}
return c.jsonWidget(color, 0, "", ifc)
}
func (c *Console) jsonWidget(color string, depth int, key string, in interface{}) (out []l.Widget) {
switch in.(type) {
case []interface{}:
if key != "" {
out = append(
out, c.getIndent(
depth, 1,
func(gtx l.Context) l.Dimensions {
return c.Body2(key).Font("bariol bold").Color(color).Fn(gtx)
},
),
)
}
D.Ln("got type []interface{}")
res := in.([]interface{})
if len(res) == 0 {
out = append(
out, c.getIndent(
depth+1, 1,
func(gtx l.Context) l.Dimensions {
return c.Body2("[]").Color(color).Fn(gtx)
},
),
)
} else {
for i := range res {
// D.S(res[i])
out = append(out, c.jsonWidget(color, depth+1, fmt.Sprint(i), res[i])...)
}
}
case map[string]interface{}:
if key != "" {
out = append(
out, c.getIndent(
depth, 1,
func(gtx l.Context) l.Dimensions {
return c.Body2(key).Font("bariol bold").Color(color).Fn(gtx)
},
),
)
}
D.Ln("got type map[string]interface{}")
res := in.(map[string]interface{})
je := GetJSONElements(res)
// D.S(je)
if len(res) == 0 {
out = append(
out, c.getIndent(
depth+1, 1,
func(gtx l.Context) l.Dimensions {
return c.Body2("{}").Color(color).Fn(gtx)
},
),
)
} else {
for i := range je {
D.S(je[i])
out = append(out, c.jsonWidget(color, depth+1, je[i].key, je[i].value)...)
}
}
case JSONElement:
res := in.(JSONElement)
key = res.key
switch res.value.(type) {
case string:
D.Ln("got type string")
res := res.value.(string)
clk := c.Theme.WidgetPool.GetClickable()
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().
Rigid(c.Body2("\"" + res + "\"").Color(color).Fn).
Rigid(c.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(
c.IconButton(clk).
Background("").
ButtonInset(0).
Color(color).
Icon(c.Icon().Color("DocBg").Scale(1).Src(&icons.ContentContentCopy)).
SetClick(
func() {
go func() {
if e := clipboard.WriteAll(res); E.Chk(e) {
}
}()
},
).Fn,
).Fn(gtx)
},
),
)
case float64:
D.Ln("got type float64")
res := res.value.(float64)
clk := c.Theme.WidgetPool.GetClickable()
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().
Rigid(c.Body2(fmt.Sprint(res)).Color(color).Fn).
Rigid(c.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(
c.IconButton(clk).
Background("").
ButtonInset(0).
Color(color).
Icon(c.Icon().Color("DocBg").Scale(1).Src(&icons.ContentContentCopy)).
SetClick(
func() {
go func() {
if e := clipboard.WriteAll(fmt.Sprint(res)); E.Chk(e) {
}
}()
},
).Fn,
).Fn(gtx)
// return c.th.ButtonLayout(clk).Embed(c.th.Body2().Color(color).Fn).Fn(gtx)
},
),
)
case bool:
D.Ln("got type bool")
res := res.value.(bool)
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Body2(fmt.Sprint(res)).Color(color).Fn(gtx)
},
),
)
}
case string:
D.Ln("got type string")
res := in.(string)
clk := c.Theme.WidgetPool.GetClickable()
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().
Rigid(c.Body2("\"" + res + "\"").Color(color).Fn).
Rigid(c.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(
c.IconButton(clk).
Background("").
ButtonInset(0).
Color(color).
Icon(c.Icon().Color("DocBg").Scale(1).Src(&icons.ContentContentCopy)).
SetClick(
func() {
go func() {
if e := clipboard.WriteAll(res); E.Chk(e) {
}
}()
},
).Fn,
).Fn(gtx)
},
),
)
case float64:
D.Ln("got type float64")
res := in.(float64)
clk := c.Theme.WidgetPool.GetClickable()
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().
Rigid(c.Body2(fmt.Sprint(res)).Color(color).Fn).
Rigid(c.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(
c.IconButton(clk).
Background("").
ButtonInset(0).
Color(color).
Icon(c.Icon().Color("DocBg").Scale(1).Src(&icons.ContentContentCopy)).
SetClick(
func() {
go func() {
if e := clipboard.WriteAll(fmt.Sprint(res)); E.Chk(e) {
}
}()
},
).Fn,
).Fn(gtx)
// return c.th.ButtonLayout(clk).Embed(c.th.Body2(fmt.Sprint(res)).Color(color).Fn).Fn(gtx)
},
),
)
case bool:
D.Ln("got type bool")
res := in.(bool)
out = append(
out,
c.jsonElement(
key, color, depth, func(gtx l.Context) l.Dimensions {
return c.Body2(fmt.Sprint(res)).Color(color).Fn(gtx)
},
),
)
default:
D.S(in)
}
return
}
func (c *Console) jsonElement(key, color string, depth int, w l.Widget) l.Widget {
return func(gtx l.Context) l.Dimensions {
return c.Theme.Flex().
Rigid(
c.getIndent(
depth, 1,
c.Body2(key).Font("bariol bold").Color(color).Fn,
),
).
Rigid(c.Inset(0.125, gel.EmptySpace(0, 0)).Fn).
Rigid(w).
Fn(gtx)
}
}

cmd/gui/createform.go
package gui
import (
"encoding/hex"
"strings"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/tyler-smith/go-bip39"
"golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/p9icons"
)
func (wg *WalletGUI) centered(w l.Widget) l.Widget {
return wg.Flex().
Flexed(0.5, gel.EmptyMaxWidth()).
Rigid(
wg.VFlex().
AlignMiddle().
Rigid(
w,
).
Fn,
).
Flexed(0.5, gel.EmptyMaxWidth()).
Fn
}
func (wg *WalletGUI) cwfLogoHeader() l.Widget {
return wg.centered(
wg.Icon().
Scale(gel.Scales["H2"]).
Color("DocText").
Src(&p9icons.ParallelCoin).Fn,
)
}
func (wg *WalletGUI) cwfHeader() l.Widget {
return wg.centered(
wg.H4("create new wallet").
Color("PanelText").
Fn,
)
}
func (wg *WalletGUI) cwfPasswordHeader() l.Widget {
return wg.H5("password").
Color("PanelText").
Fn
}
func (wg *WalletGUI) cwfShuffleButton() l.Widget {
return wg.ButtonLayout(
wg.clickables["createShuffle"].SetClick(
func() {
wg.ShuffleSeed()
wg.inputs["walletWords"].SetText("") // wg.createWords)
wg.restoring = false
wg.createVerifying = false
},
),
).
CornerRadius(0).
Corners(0).
Background("Primary").
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H6"],
).
Color("DocText").
Src(
&icons.NavigationRefresh,
).Fn,
).
Rigid(
wg.Body1("new").Color("DocText").Fn,
).
Fn,
).Fn,
).Fn
}
func (wg *WalletGUI) cwfRestoreButton() l.Widget {
return wg.ButtonLayout(
wg.clickables["createRestore"].SetClick(
func() {
D.Ln("clicked restore button")
if !wg.restoring {
wg.inputs["walletRestore"].SetText("")
wg.createMatch = ""
wg.restoring = true
wg.createVerifying = false
} else {
wg.createMatch = ""
wg.restoring = false
wg.createVerifying = false
}
},
),
).
CornerRadius(0).
Corners(0).
Background("Primary").
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H6"],
).
Color("DocText").
Src(
&icons.ActionRestore,
).Fn,
).
Rigid(
wg.Body1("restore").Color("DocText").Fn,
).
Fn,
).Fn,
).Fn
}
func (wg *WalletGUI) cwfSetGenesis() l.Widget {
return func(gtx l.Context) l.Dimensions {
if !wg.bools["testnet"].GetValue() {
return l.Dimensions{}
} else {
return wg.ButtonLayout(
wg.clickables["genesis"].SetClick(
func() {
seedString := "f4d2c4c542bb52512ed9e6bbfa2d000e576a0c8b4ebd1acafd7efa37247366bc"
var e error
if wg.createSeed, e = hex.DecodeString(seedString); F.Chk(e) {
panic(e)
}
var wk string
if wk, e = bip39.NewMnemonic(wg.createSeed); E.Chk(e) {
panic(e)
}
wks := strings.Split(wk, " ")
var out string
for i := 0; i < 24; i += 4 {
out += strings.Join(wks[i:i+4], " ")
if i+4 < 24 {
out += "\n"
}
}
wg.showWords = out
wg.createWords = wk
wg.createMatch = wk
wg.inputs["walletWords"].SetText(wk)
wg.createVerifying = true
},
),
).
CornerRadius(0).
Corners(0).
Background("Primary").
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H6"],
).
Color("DocText").
Src(
&icons.ActionOpenInNew,
).Fn,
).
Rigid(
wg.Body1("genesis").Color("DocText").Fn,
).
Fn,
).Fn,
).Fn(gtx)
}
}
}
func (wg *WalletGUI) cwfSetAutofill() l.Widget {
return func(gtx l.Context) l.Dimensions {
if !wg.bools["testnet"].GetValue() {
return l.Dimensions{}
} else {
return wg.ButtonLayout(
wg.clickables["autofill"].SetClick(
func() {
wk := wg.createWords
wg.createMatch = wk
wg.inputs["walletWords"].SetText(wk)
wg.createVerifying = true
},
),
).
CornerRadius(0).
Corners(0).
Background("Primary").
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H6"],
).
Color("DocText").
Src(
&icons.ActionOpenInNew,
).Fn,
).
Rigid(
wg.Body1("autofill").Color("DocText").Fn,
).
Fn,
).Fn,
).Fn(gtx)
}
}
}
func (wg *WalletGUI) cwfSeedHeader() l.Widget {
return wg.Flex(). //AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.H5("seed").
Color("PanelText").
Fn,
).Fn,
).
Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(wg.cwfShuffleButton()).
Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(wg.cwfRestoreButton()).
Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(wg.cwfSetGenesis()).
Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Rigid(wg.cwfSetAutofill()).
Fn
}
func (wg *WalletGUI) cfwWords() (w l.Widget) {
if !wg.createVerifying {
col := "DocText"
if wg.createWords == wg.createMatch {
col = "Success"
}
return wg.Flex().
Rigid(
wg.ButtonLayout(
wg.clickables["createVerify"].SetClick(
func() {
wg.createVerifying = true
},
),
).Background("Transparent").Embed(
wg.VFlex().
Rigid(
wg.Caption("Write the following words down, then click to re-enter and verify transcription").
Color("PanelText").
Fn,
).
Rigid(
wg.Flex().Flexed(
1,
wg.Body1(wg.showWords).Alignment(text.Middle).Color(col).Fn,
).Fn,
).Fn,
).Fn,
).
Fn
}
return nil
}
func (wg *WalletGUI) cfwWordsVerify() (w l.Widget) {
if wg.createVerifying {
verifyState := wg.Button(
wg.clickables["createVerify"].SetClick(
func() {
wg.createVerifying = false
},
),
).Text("back").Fn
if wg.createWords == wg.createMatch {
verifyState = wg.Inset(0.25, wg.Body1("match").Color("Success").Fn).Fn
}
return wg.Flex().
Rigid(
verifyState,
).
Rigid(
wg.inputs["walletWords"].Fn,
).
Fn
}
return nil
}
func (wg *WalletGUI) cfwRestore() (w l.Widget) {
w = func(l.Context) l.Dimensions {
return l.Dimensions{}
}
if wg.restoring {
// restoreState := wg.Button(
// wg.clickables["createRestore"].SetClick(
// func() {
// wg.restoring = false
// },
// ),
// ).Text("back").Fn
if wg.createWords == wg.createMatch {
w = wg.Flex().AlignMiddle().
Rigid(
wg.Inset(0.25, wg.H5("valid").Color("Success").Fn).Fn,
).Fn
}
return wg.Flex().
Rigid(
w,
).
Rigid(
wg.inputs["walletRestore"].Fn,
).
Fn
}
return
}
func (wg *WalletGUI) cwfTestnetSettings() (out l.Widget) {
return wg.Flex().
Rigid(
func(gtx l.Context) l.Dimensions {
return wg.CheckBox(
wg.bools["testnet"].SetOnChange(
func(b bool) {
if !b {
wg.bools["solo"].Value(false)
wg.bools["lan"].Value(false)
// wg.cx.Config.MulticastPass.Set("pa55word")
wg.cx.Config.Solo.F()
wg.cx.Config.LAN.F()
wg.ShuffleSeed()
wg.createVerifying = false
wg.inputs["walletWords"].SetText("")
wg.Invalidate()
}
wg.createWalletTestnetToggle(b)
},
),
).
IconColor("Primary").
TextColor("DocText").
Text("Use Testnet").
Fn(gtx)
},
).
Rigid(
func(gtx l.Context) l.Dimensions {
checkColor, textColor := "Primary", "DocText"
if !wg.bools["testnet"].GetValue() {
gtx = gtx.Disabled()
checkColor, textColor = "scrim", "scrim"
}
return wg.CheckBox(
wg.bools["lan"].SetOnChange(
func(b bool) {
D.Ln("lan now set to", b)
wg.cx.Config.LAN.Set(b)
if b && wg.cx.Config.Solo.True() {
wg.cx.Config.Solo.F()
wg.cx.Config.DisableDNSSeed.T()
wg.cx.Config.AutoListen.F()
wg.bools["solo"].Value(false)
// wg.cx.Config.MulticastPass.Set("pa55word")
wg.Invalidate()
} else {
wg.cx.Config.Solo.F()
wg.cx.Config.DisableDNSSeed.F()
// wg.cx.Config.MulticastPass.Set("pa55word")
wg.cx.Config.AutoListen.T()
}
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
},
),
).
IconColor(checkColor).
TextColor(textColor).
Text("LAN only").
Fn(gtx)
},
).
Rigid(
func(gtx l.Context) l.Dimensions {
checkColor, textColor := "Primary", "DocText"
if !wg.bools["testnet"].GetValue() {
gtx = gtx.Disabled()
checkColor, textColor = "scrim", "scrim"
}
return wg.CheckBox(
wg.bools["solo"].SetOnChange(
func(b bool) {
D.Ln("solo now set to", b)
wg.cx.Config.Solo.Set(b)
if b && wg.cx.Config.LAN.True() {
wg.cx.Config.LAN.F()
wg.cx.Config.DisableDNSSeed.T()
wg.cx.Config.AutoListen.F()
// wg.cx.Config.MulticastPass.Set("pa55word")
wg.bools["lan"].Value(false)
wg.Invalidate()
} else {
wg.cx.Config.LAN.F()
wg.cx.Config.DisableDNSSeed.F()
// wg.cx.Config.MulticastPass.Set("pa55word")
wg.cx.Config.AutoListen.T()
}
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
},
),
).
IconColor(checkColor).
TextColor(textColor).
Text("Solo (mine without peers)").
Fn(gtx)
},
).
Fn
}
func (wg *WalletGUI) cwfConfirmation() (out l.Widget) {
return wg.CheckBox(
wg.bools["ihaveread"].SetOnChange(
func(b bool) {
D.Ln("confirmed read", b)
// if the password has been entered, we need to copy it to the variable
if wg.createWalletPasswordsMatch() {
wg.cx.Config.WalletPass.Set(wg.passwords["confirmPassEditor"].GetPassword())
}
},
),
).
IconColor("Primary").
TextColor("DocText").
Text(
"I have stored the seed and password safely " +
"and understand it cannot be recovered",
).
Fn
}
func (wg *WalletGUI) createWalletFormWidgets() (out []l.Widget) {
out = append(
out,
wg.cwfLogoHeader(),
wg.cwfHeader(),
wg.cwfPasswordHeader(),
wg.passwords["passEditor"].
Fn,
wg.passwords["confirmPassEditor"].
Fn,
wg.cwfSeedHeader(),
)
if wg.createVerifying {
out = append(
out, wg.cfwWordsVerify(),
)
} else if wg.restoring {
out = append(out, wg.cfwRestore())
} else {
out = append(out, wg.cfwWords())
}
out = append(
out,
wg.cwfTestnetSettings(),
wg.cwfConfirmation(),
)
return
}

cmd/gui/createwallet.go Normal file
@@ -0,0 +1,298 @@
package gui
import (
"fmt"
"os"
"path/filepath"
"time"
"github.com/p9c/p9/pkg/qu"
"golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/cmd/wallet"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/constant"
"github.com/p9c/p9/pkg/fork"
l "github.com/p9c/p9/pkg/gel/gio/layout"
)
const slash = string(os.PathSeparator)
func (wg *WalletGUI) CreateWalletPage(gtx l.Context) l.Dimensions {
walletForm := wg.createWalletFormWidgets()
le := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(0.25, walletForm[index]).Fn(gtx)
}
return func(gtx l.Context) l.Dimensions {
return wg.Fill(
"DocBg", l.Center, 0, 0,
// wg.Inset(
// 0.5,
wg.VFlex().
Flexed(
1,
wg.lists["createWallet"].Vertical().Start().Length(len(walletForm)).ListElement(le).Fn,
).
Rigid(
wg.createConfirmExitBar(),
).Fn,
// ).Fn,
).Fn(gtx)
}(gtx)
}
func (wg *WalletGUI) createConfirmExitBar() l.Widget {
return wg.VFlex().
// Rigid(
// wg.Inset(
// ,
// ).Fn,
// ).
Rigid(
wg.Inset(0.5,
wg.Flex().
Rigid(
func(gtx l.Context) l.Dimensions {
return wg.Flex().
Rigid(
wg.ButtonLayout(
wg.clickables["quit"].SetClick(
func() {
interrupt.Request()
},
),
).
CornerRadius(0.5).
Corners(0).
Background("PanelBg").
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H4"],
).
Color("DocText").
Src(
&icons.MapsDirectionsRun,
).Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Rigid(
wg.H6("exit").Color("DocText").Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Fn,
).Fn,
).Fn,
).
Fn(gtx)
},
).
Flexed(
1,
gel.EmptyMaxWidth(),
).
Rigid(
func(gtx l.Context) l.Dimensions {
if !wg.createWalletInputsAreValid() {
gtx = gtx.Disabled()
}
return wg.Flex().
Rigid(
wg.ButtonLayout(
wg.clickables["createWallet"].SetClick(
func() {
go wg.createWalletAction()
},
),
).
CornerRadius(0).
Corners(0).
Background("Primary").
Embed(
// wg.Fill("DocText",
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H4"],
).
Color("DocText").
Src(
&icons.ContentCreate,
).Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Rigid(
wg.H6("create").Color("DocText").Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Fn,
).Fn,
).Fn,
).
Fn(gtx)
},
).
Fn,
).Fn,
).
Fn
}
func (wg *WalletGUI) createWalletPasswordsMatch() bool {
return wg.passwords["passEditor"].GetPassword() != "" &&
wg.passwords["confirmPassEditor"].GetPassword() != "" &&
len(wg.passwords["passEditor"].GetPassword()) >= 8 &&
wg.passwords["passEditor"].GetPassword() ==
wg.passwords["confirmPassEditor"].GetPassword()
}
func (wg *WalletGUI) createWalletInputsAreValid() bool {
return wg.createWalletPasswordsMatch() && wg.bools["ihaveread"].GetValue() && wg.createWords == wg.createMatch
}
func (wg *WalletGUI) createWalletAction() {
// wg.NodeRunCommandChan <- "stop"
D.Ln("clicked submit wallet")
wg.cx.Config.WalletFile.Set(filepath.Join(wg.cx.Config.DataDir.V(), wg.cx.ActiveNet.Name, constant.DbName))
dbDir := wg.cx.Config.WalletFile.V()
loader := wallet.NewLoader(wg.cx.ActiveNet, dbDir, 250)
// seed, _ := hex.DecodeString(wg.inputs["walletSeed"].GetText())
seed := wg.createSeed
pass := wg.passwords["passEditor"].GetPassword()
wg.cx.Config.WalletPass.Set(pass)
D.Ln("password", pass)
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
w, e := loader.CreateNewWallet(
[]byte(pass),
[]byte(pass),
seed,
time.Now(),
false,
wg.cx.Config,
qu.T(),
)
D.Ln("*** created wallet")
if E.Chk(e) {
return
}
w.Stop()
D.Ln("shutting down wallet", w.ShuttingDown())
w.WaitForShutdown()
D.Ln("starting main app")
wg.cx.Config.Generate.T()
wg.cx.Config.GenThreads.Set(1)
wg.cx.Config.NodeOff.F()
wg.cx.Config.WalletOff.F()
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
// // we are going to assume the config is not manually misedited
// if apputil.FileExists(*wg.cx.Config.ConfigFile) {
// b, e := ioutil.ReadFile(*wg.cx.Config.ConfigFile)
// if e == nil {
// wg.cx.Config, wg.cx.ConfigMap = pod.EmptyConfig()
// e = json.Unmarshal(b, wg.cx.Config)
// if e != nil {
// E.Ln("error unmarshalling config", e)
// // os.Exit(1)
// panic(e)
// }
// } else {
// F.Ln("unexpected error reading configuration file:", e)
// // os.Exit(1)
// // return e
// panic(e)
// }
// }
*wg.noWallet = false
// interrupt.Request()
// wg.wallet.Stop()
// wg.wallet.Start()
// wg.node.Start()
// wg.miner.Start()
wg.unlockPassword.Editor().SetText(pass)
wg.unlockWallet(pass)
interrupt.RequestRestart()
}
func (wg *WalletGUI) createWalletTestnetToggle(b bool) {
D.Ln("testnet on?", b)
// if the password has been entered, we need to copy it to the variable
if wg.createWalletPasswordsMatch() {
wg.cx.Config.WalletPass.Set(wg.passwords["confirmPassEditor"].GetPassword())
D.Ln("wallet pass", wg.cx.Config.WalletPass.V())
}
if b {
wg.cx.ActiveNet = &chaincfg.TestNet3Params
fork.IsTestnet = true
} else {
wg.cx.ActiveNet = &chaincfg.MainNetParams
fork.IsTestnet = false
}
I.Ln("activenet:", wg.cx.ActiveNet.Name)
D.Ln("setting ports to match network")
wg.cx.Config.Network.Set(wg.cx.ActiveNet.Name)
wg.cx.Config.P2PListeners.Set(
[]string{"0.0.0.0:" + wg.cx.ActiveNet.DefaultPort},
)
wg.cx.Config.P2PConnect.Set([]string{"127.0.0.1:" + wg.cx.ActiveNet.DefaultPort})
address := fmt.Sprintf(
"127.0.0.1:%s",
wg.cx.ActiveNet.RPCClientPort,
)
wg.cx.Config.RPCListeners.Set([]string{address})
wg.cx.Config.RPCConnect.Set(address)
address = fmt.Sprintf("127.0.0.1:%s", wg.cx.ActiveNet.WalletRPCServerPort)
wg.cx.Config.WalletRPCListeners.Set([]string{address})
wg.cx.Config.WalletServer.Set(address)
wg.cx.Config.NodeOff.F()
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
}
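
The `if ...; E.Chk(e) { return }` guards used throughout these files follow the contribution guidelines: `e` not `err`, declare with `var`, check inline, return early. A minimal self-contained sketch of the idiom, with a stand-in `Chk` since the repo's logger is not imported here and `parsePort`/`listenerFor` are illustrative names only:

```go
package main

import "errors"

// Chk mimics the logger's E.Chk helper used throughout this repo:
// it reports whether e is non-nil (the logging side effect is elided),
// so call sites can guard error handling with `if ...; Chk(e) {`.
func Chk(e error) bool {
	return e != nil
}

// parsePort is a trivial stand-in validator for the example.
func parsePort(s string) (port string, e error) {
	if s == "" {
		return "", errors.New("empty port")
	}
	return s, nil
}

// listenerFor shows the convention: declare e's receiver with var,
// check the call inline, and return early on failure.
func listenerFor(host, port string) (addr string, e error) {
	var p string
	if p, e = parsePort(port); Chk(e) {
		// prefer to return early, per the contribution guidelines
		return "", e
	}
	return host + ":" + p, nil
}

func main() {
	addr, _ := listenerFor("127.0.0.1", "11048")
	println(addr)
}
```

Because `Chk` returns true on error, the block body is always the error path, which keeps the happy path unindented.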

cmd/gui/debug.go Normal file
@@ -0,0 +1,62 @@
package gui
// func (wg *WalletGUI) goRoutines() {
// var e error
// if wg.App.ActivePageGet() == "goroutines" || wg.unlockPage.ActivePageGet() == "goroutines" {
// D.Ln("updating goroutines data")
// var b []byte
// buf := bytes.NewBuffer(b)
// if e = pprof.Lookup("goroutine").WriteTo(buf, 2); E.Chk(e) {
// }
// lines := strings.Split(buf.String(), "\n")
// var out []l.Widget
// var clickables []*p9.Clickable
// for x := range lines {
// i := x
// clickables = append(clickables, wg.Clickable())
// var text string
// if strings.HasPrefix(lines[i], "goroutine") && i < len(lines)-2 {
// text = lines[i+2]
// text = strings.TrimSpace(strings.Split(text, " ")[0])
// // outString += text + "\n"
// out = append(
// out, func(gtx l.Context) l.Dimensions {
// return wg.ButtonLayout(clickables[i]).Embed(
// wg.ButtonInset(
// 0.25,
// wg.Caption(text).
// Color("DocText").Fn,
// ).Fn,
// ).Background("Transparent").SetClick(
// func() {
// go func() {
// out := make([]string, 2)
// split := strings.Split(text, ":")
// if len(split) > 2 {
// out[0] = strings.Join(split[:len(split)-1], ":")
// out[1] = split[len(split)-1]
// } else {
// out[0] = split[0]
// out[1] = split[1]
// }
// D.Ln("path", out[0], "line", out[1])
// goland := "goland64.exe"
// if runtime.GOOS != "windows" {
// goland = "goland"
// }
// launch := exec.Command(goland, "--line", out[1], out[0])
// if e = launch.Start(); E.Chk(e) {
// }
// }()
// },
// ).
// Fn(gtx)
// },
// )
// }
// }
// // D.Ln(outString)
// wg.State.SetGoroutines(out)
// wg.invalidate <- struct{}{}
// }
// }

cmd/gui/events.go Normal file
@@ -0,0 +1,728 @@
package gui
import (
"encoding/json"
"io/ioutil"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/chainrpc/p2padvt"
"github.com/p9c/p9/pkg/transport"
"github.com/p9c/p9/pkg/wire"
"github.com/p9c/p9/pkg/btcjson"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/rpcclient"
"github.com/p9c/p9/pkg/util"
)
func (wg *WalletGUI) WalletAndClientRunning() bool {
running := wg.wallet.Running() && wg.WalletClient != nil && !wg.WalletClient.Disconnected()
// D.Ln("wallet and wallet rpc client are running?", running)
return running
}
func (wg *WalletGUI) Advertise() (e error) {
if wg.node.Running() && wg.cx.Config.Discovery.True() {
// I.Ln("sending out p2p advertisment")
if e = wg.multiConn.SendMany(
p2padvt.Magic,
transport.GetShards(p2padvt.Get(uint64(wg.cx.Config.UUID.V()), (wg.cx.Config.P2PListeners.S())[0])),
); E.Chk(e) {
}
}
return
}
// func (wg *WalletGUI) Tickers() {
// first := true
// D.Ln("updating best block")
// var e error
// var height int32
// var h *chainhash.Hash
// if h, height, e = wg.ChainClient.GetBestBlock(); E.Chk(e) {
// // interrupt.Request()
// return
// }
// D.Ln(h, height)
// wg.State.SetBestBlockHeight(height)
// wg.State.SetBestBlockHash(h)
// go func() {
// var e error
// seconds := time.Tick(time.Second * 3)
// fiveSeconds := time.Tick(time.Second * 5)
// totalOut:
// for {
// preconnect:
// for {
// select {
// case <-seconds:
// D.Ln("preconnect loop")
// if e = wg.Advertise(); D.Chk(e) {
// }
// if wg.ChainClient != nil {
// wg.ChainClient.Disconnect()
// wg.ChainClient.Shutdown()
// wg.ChainClient = nil
// }
// if wg.WalletClient != nil {
// wg.WalletClient.Disconnect()
// wg.WalletClient.Shutdown()
// wg.WalletClient = nil
// }
// if !wg.node.Running() {
// break
// }
// break preconnect
// case <-fiveSeconds:
// continue
// case <-wg.quit.Wait():
// break totalOut
// }
// }
// out:
// for {
// select {
// case <-seconds:
// if e = wg.Advertise(); D.Chk(e) {
// }
// if !wg.cx.IsCurrent() {
// continue
// } else {
// wg.cx.Syncing.Store(false)
// }
// D.Ln("---------------------- ready", wg.ready.Load())
// D.Ln("---------------------- WalletAndClientRunning", wg.WalletAndClientRunning())
// D.Ln("---------------------- stateLoaded", wg.stateLoaded.Load())
// wg.node.Start()
// if e = wg.writeWalletCookie(); E.Chk(e) {
// }
// wg.wallet.Start()
// D.Ln("connecting to chain")
// if e = wg.chainClient(); E.Chk(e) {
// break
// }
// if wg.wallet.Running() { // && wg.WalletClient == nil {
// D.Ln("connecting to wallet")
// if e = wg.walletClient(); E.Chk(e) {
// break
// }
// }
// if !wg.node.Running() {
// D.Ln("breaking out node not running")
// break out
// }
// if wg.ChainClient == nil {
// D.Ln("breaking out chainclient is nil")
// break out
// }
// // if wg.WalletClient == nil {
// // D.Ln("breaking out walletclient is nil")
// // break out
// // }
// if wg.ChainClient.Disconnected() {
// D.Ln("breaking out chainclient disconnected")
// break out
// }
// // if wg.WalletClient.Disconnected() {
// // D.Ln("breaking out walletclient disconnected")
// // break out
// // }
// // var e error
// if first {
// wg.updateChainBlock()
// wg.invalidate <- struct{}{}
// }
//
// if wg.WalletAndClientRunning() {
// if first {
// wg.processWalletBlockNotification()
// }
// // if wg.stateLoaded.Load() { // || wg.currentReceiveGetNew.Load() {
// // wg.ReceiveAddressbook = func(gtx l.Context) l.Dimensions {
// // var widgets []l.Widget
// // widgets = append(widgets, wg.ReceivePage.GetAddressbookHistoryCards("DocBg")...)
// // le := func(gtx l.Context, index int) l.Dimensions {
// // return widgets[index](gtx)
// // }
// // return wg.Flex().Rigid(
// // wg.lists["receiveAddresses"].Length(len(widgets)).Vertical().
// // ListElement(le).Fn,
// // ).Fn(gtx)
// // }
// // }
// if wg.stateLoaded.Load() && !wg.State.IsReceivingAddress() { // || wg.currentReceiveGetNew.Load() {
// wg.GetNewReceivingAddress()
// if wg.currentReceiveQRCode == nil || wg.currentReceiveRegenerate.Load() { // || wg.currentReceiveGetNew.Load() {
// wg.GetNewReceivingQRCode(wg.ReceivePage.urn)
// }
// }
// }
// wg.invalidate <- struct{}{}
// first = false
// case <-fiveSeconds:
// case <-wg.quit.Wait():
// break totalOut
// }
// }
// }
// }()
// }
func (wg *WalletGUI) updateThingies() (e error) {
// update the configuration
var b []byte
if b, e = ioutil.ReadFile(wg.cx.Config.ConfigFile.V()); !E.Chk(e) {
if e = json.Unmarshal(b, wg.cx.Config); !E.Chk(e) {
return
}
}
return
}
func (wg *WalletGUI) updateChainBlock() {
D.Ln("processChainBlockNotification")
var e error
if wg.ChainClient == nil || wg.ChainClient.Disconnected() {
D.Ln("connecting ChainClient")
if e = wg.chainClient(); E.Chk(e) {
return
}
}
var h *chainhash.Hash
var height int32
D.Ln("updating best block")
if h, height, e = wg.ChainClient.GetBestBlock(); E.Chk(e) {
// interrupt.Request()
return
}
D.Ln(h, height)
wg.State.SetBestBlockHeight(height)
wg.State.SetBestBlockHash(h)
}
func (wg *WalletGUI) processChainBlockNotification(hash *chainhash.Hash, height int32, t time.Time) {
D.Ln("processChainBlockNotification")
wg.State.SetBestBlockHeight(height)
wg.State.SetBestBlockHash(hash)
// if wg.WalletAndClientRunning() {
// wg.processWalletBlockNotification()
// }
}
func (wg *WalletGUI) processWalletBlockNotification() bool {
D.Ln("processWalletBlockNotification")
if !wg.WalletAndClientRunning() {
D.Ln("wallet and client not running")
return false
}
// check account balance
var unconfirmed amt.Amount
var e error
if unconfirmed, e = wg.WalletClient.GetUnconfirmedBalance("default"); E.Chk(e) {
return false
}
wg.State.SetBalanceUnconfirmed(unconfirmed.ToDUO())
var confirmed amt.Amount
if confirmed, e = wg.WalletClient.GetBalance("default"); E.Chk(e) {
return false
}
wg.State.SetBalance(confirmed.ToDUO())
var atr []btcjson.ListTransactionsResult
// str := wg.State.allTxs.Load()
if atr, e = wg.WalletClient.ListTransactionsCount("default", 2<<32); E.Chk(e) {
return false
}
// D.Ln(len(atr))
// wg.State.SetAllTxs(append(str, atr...))
wg.State.SetAllTxs(atr)
wg.txMx.Lock()
wg.txHistoryList = wg.State.filteredTxs.Load()
wg.txMx.Unlock()
wg.RecentTransactions(10, "recent")
wg.RecentTransactions(-1, "history")
return true
}
func (wg *WalletGUI) forceUpdateChain() {
wg.updateChainBlock()
var e error
var height int32
var tip *chainhash.Hash
if tip, height, e = wg.ChainClient.GetBestBlock(); E.Chk(e) {
return
}
var block *wire.Block
if block, e = wg.ChainClient.GetBlock(tip); E.Chk(e) {
return
}
t := block.Header.Timestamp
wg.processChainBlockNotification(tip, height, t)
}
func (wg *WalletGUI) ChainNotifications() *rpcclient.NotificationHandlers {
return &rpcclient.NotificationHandlers{
// OnClientConnected: func() {
// // go func() {
// D.Ln("(((NOTIFICATION))) CHAIN CLIENT CONNECTED!")
// wg.cx.Syncing.Store(true)
// wg.forceUpdateChain()
// wg.processWalletBlockNotification()
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// wg.cx.Syncing.Store(false)
// },
// OnBlockConnected: func(hash *chainhash.Hash, height int32, t time.Time) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) chain OnBlockConnected", hash, height, t)
// wg.processChainBlockNotification(hash, height, t)
// // wg.processWalletBlockNotification()
// // todo: send system notification of new block, set configuration to disable also
// // if wg.WalletAndClientRunning() {
// // var e error
// // if _, e = wg.WalletClient.RescanBlocks([]chainhash.Hash{*hash}); E.Chk(e) {
// // }
// // }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
OnFilteredBlockConnected: func(height int32, header *wire.BlockHeader, txs []*util.Tx) {
nbh := header.BlockHash()
wg.processChainBlockNotification(&nbh, height, header.Timestamp)
// if time.Now().Sub(time.Unix(wg.lastUpdated.Load(), 0)) < time.Second {
// return
// }
wg.lastUpdated.Store(time.Now().Unix())
hash := header.BlockHash()
D.Ln(
"(((NOTIFICATION))) OnFilteredBlockConnected hash", hash, "POW hash:",
header.BlockHashWithAlgos(height), "height", height,
)
// D.S(txs)
if wg.processWalletBlockNotification() {
}
// filename := filepath.Join(*wg.cx.Config.DataDir, "state.json")
// if e := wg.State.Save(filename, wg.cx.Config.WalletPass); E.Chk(e) {
// }
// if wg.WalletAndClientRunning() {
// var e error
// if _, e = wg.WalletClient.RescanBlocks([]chainhash.Hash{hash}); E.Chk(e) {
// }
// }
wg.RecentTransactions(10, "recent")
wg.RecentTransactions(-1, "history")
wg.invalidate <- struct{}{}
},
// OnBlockDisconnected: func(hash *chainhash.Hash, height int32, t time.Time) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnBlockDisconnected", hash, height, t)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnFilteredBlockDisconnected: func(height int32, header *wire.BlockHeader) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnFilteredBlockDisconnected", height, header)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnRecvTx: func(transaction *util.Tx, details *btcjson.BlockDetails) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRecvTx", transaction, details)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnRedeemingTx: func(transaction *util.Tx, details *btcjson.BlockDetails) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRedeemingTx", transaction, details)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnRelevantTxAccepted: func(transaction []byte) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRelevantTxAccepted", transaction)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnRescanFinished: func(hash *chainhash.Hash, height int32, blkTime time.Time) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRescanFinished", hash, height, blkTime)
// wg.processChainBlockNotification(hash, height, blkTime)
// // update best block height
// // wg.processWalletBlockNotification()
// // stop showing syncing indicator
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnRescanProgress: func(hash *chainhash.Hash, height int32, blkTime time.Time) {
// D.Ln("(((NOTIFICATION))) OnRescanProgress", hash, height, blkTime)
// // update best block height
// // wg.processWalletBlockNotification()
// // set to show syncing indicator
// if wg.processWalletBlockNotification() {
// }
// wg.Syncing.Store(true)
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
OnTxAccepted: func(hash *chainhash.Hash, amount amt.Amount) {
// if wg.syncing.Load() {
// D.Ln("OnTxAccepted but we are syncing")
// return
// }
D.Ln("(((NOTIFICATION))) OnTxAccepted")
D.Ln(hash, amount)
// if wg.processWalletBlockNotification() {
// }
wg.RecentTransactions(10, "recent")
wg.RecentTransactions(-1, "history")
wg.invalidate <- struct{}{}
},
// OnTxAcceptedVerbose: func(txDetails *btcjson.TxRawResult) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnTxAcceptedVerbose")
// D.S(txDetails)
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnPodConnected: func(connected bool) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnPodConnected", connected)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// wg.RecentTransactions(10, "recent")
// wg.RecentTransactions(-1, "history")
// wg.invalidate <- struct{}{}
// },
// OnAccountBalance: func(account string, balance util.Amount, confirmed bool) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("OnAccountBalance")
// // what does this actually do
// D.Ln(account, balance, confirmed)
// },
// OnWalletLockState: func(locked bool) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("OnWalletLockState", locked)
// // switch interface to unlock page
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// // TODO: lock when idle... how to get trigger for idleness in UI?
// },
// OnUnknownNotification: func(method string, params []json.RawMessage) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnUnknownNotification", method, params)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
}
}
func (wg *WalletGUI) WalletNotifications() *rpcclient.NotificationHandlers {
// if !wg.wallet.Running() || wg.WalletClient == nil || wg.WalletClient.Disconnected() {
// return nil
// }
// var updating bool
return &rpcclient.NotificationHandlers{
// OnClientConnected: func() {
// if wg.cx.Syncing.Load() {
// return
// }
// if updating {
// return
// }
// D.Ln("(((NOTIFICATION))) wallet client connected, running initial processes")
// for !wg.processWalletBlockNotification() {
// time.Sleep(time.Second)
// D.Ln("(((NOTIFICATION))) retry attempting to update wallet transactions")
// }
// filename := filepath.Join(wg.cx.DataDir, "state.json")
// if e := wg.State.Save(filename, wg.cx.Config.WalletPass); E.Chk(e) {
// }
// wg.invalidate <- struct{}{}
// updating = false
// },
// OnBlockConnected: func(hash *chainhash.Hash, height int32, t time.Time) {
// if wg.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) wallet OnBlockConnected", hash, height, t)
// wg.processWalletBlockNotification()
// filename := filepath.Join(wg.cx.DataDir, "state.json")
// if e := wg.State.Save(filename, wg.cx.Config.WalletPass); E.Chk(e) {
// }
// wg.invalidate <- struct{}{}
// },
// OnFilteredBlockConnected: func(height int32, header *wire.BlockHeader, txs []*util.Tx) {
// if wg.Syncing.Load() {
// return
// }
// D.Ln(
// "(((NOTIFICATION))) wallet OnFilteredBlockConnected hash", header.BlockHash(), "POW hash:",
// header.BlockHashWithAlgos(height), "height", height,
// )
// // D.S(txs)
// nbh := header.BlockHash()
// wg.processChainBlockNotification(&nbh, height, header.Timestamp)
// if wg.processWalletBlockNotification() {
// }
// filename := filepath.Join(wg.cx.DataDir, "state.json")
// if e := wg.State.Save(filename, wg.cx.Config.WalletPass); E.Chk(e) {
// }
// wg.invalidate <- struct{}{}
// },
// OnBlockDisconnected: func(hash *chainhash.Hash, height int32, t time.Time) {
// if wg.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnBlockDisconnected", hash, height, t)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnFilteredBlockDisconnected: func(height int32, header *wire.BlockHeader) {
// if wg.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnFilteredBlockDisconnected", height, header)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnRecvTx: func(transaction *util.Tx, details *btcjson.BlockDetails) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRecvTx", transaction, details)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnRedeemingTx: func(transaction *util.Tx, details *btcjson.BlockDetails) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRedeemingTx", transaction, details)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnRelevantTxAccepted: func(transaction []byte) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRelevantTxAccepted", transaction)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnRescanFinished: func(hash *chainhash.Hash, height int32, blkTime time.Time) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnRescanFinished", hash, height, blkTime)
// wg.processChainBlockNotification(hash, height, blkTime)
// // update best block height
// // wg.processWalletBlockNotification()
// // stop showing syncing indicator
// if wg.processWalletBlockNotification() {
// }
// wg.cx.Syncing.Store(false)
// wg.invalidate <- struct{}{}
// },
// OnRescanProgress: func(hash *chainhash.Hash, height int32, blkTime time.Time) {
// D.Ln("(((NOTIFICATION))) OnRescanProgress", hash, height, blkTime)
// // // update best block height
// // // wg.processWalletBlockNotification()
// // // set to show syncing indicator
// // if wg.processWalletBlockNotification() {
// // }
// // wg.Syncing.Store(true)
// wg.invalidate <- struct{}{}
// },
// OnTxAccepted: func(hash *chainhash.Hash, amount util.Amount) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnTxAccepted")
// D.Ln(hash, amount)
// if wg.processWalletBlockNotification() {
// }
// },
// OnTxAcceptedVerbose: func(txDetails *btcjson.TxRawResult) {
// if wg.cx.Syncing.Load() {
// return
// }
// D.Ln("(((NOTIFICATION))) OnTxAcceptedVerbose")
// D.S(txDetails)
// if wg.processWalletBlockNotification() {
// }
// },
// OnPodConnected: func(connected bool) {
// D.Ln("(((NOTIFICATION))) OnPodConnected", connected)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
// OnAccountBalance: func(account string, balance util.Amount, confirmed bool) {
// D.Ln("OnAccountBalance")
// // what does this actually do
// D.Ln(account, balance, confirmed)
// },
// OnWalletLockState: func(locked bool) {
// D.Ln("OnWalletLockState", locked)
// // switch interface to unlock page
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// // TODO: lock when idle... how to get trigger for idleness in UI?
// },
// OnUnknownNotification: func(method string, params []json.RawMessage) {
// D.Ln("(((NOTIFICATION))) OnUnknownNotification", method, params)
// wg.forceUpdateChain()
// if wg.processWalletBlockNotification() {
// }
// },
}
}
// chainClient creates and connects the chain RPC client if the node is enabled, and subscribes to block notifications
func (wg *WalletGUI) chainClient() (e error) {
D.Ln("starting up chain client")
if wg.cx.Config.NodeOff.True() {
W.Ln("node is disabled")
return nil
}
if wg.ChainClient == nil { // || wg.ChainClient.Disconnected() {
D.Ln(wg.cx.Config.RPCConnect.V())
// wg.ChainMutex.Lock()
// defer wg.ChainMutex.Unlock()
// I.S(wg.certs)
if wg.ChainClient, e = rpcclient.New(
&rpcclient.ConnConfig{
Host: wg.cx.Config.RPCConnect.V(),
Endpoint: "ws",
User: wg.cx.Config.Username.V(),
Pass: wg.cx.Config.Password.V(),
TLS: wg.cx.Config.ClientTLS.True(),
Certificates: wg.certs,
DisableAutoReconnect: false,
DisableConnectOnNew: false,
}, wg.ChainNotifications(), wg.cx.KillAll,
); E.Chk(e) {
return
}
}
if wg.ChainClient.Disconnected() {
D.Ln("connecting chain client")
if e = wg.ChainClient.Connect(1); E.Chk(e) {
return
}
}
if e = wg.ChainClient.NotifyBlocks(); !E.Chk(e) {
D.Ln("subscribed to new blocks")
// wg.WalletNotifications()
wg.invalidate <- struct{}{}
}
return
}
// walletClient creates the wallet RPC client if the wallet is enabled, and subscribes to new transaction notifications
func (wg *WalletGUI) walletClient() (e error) {
D.Ln("connecting to wallet")
if wg.cx.Config.WalletOff.True() {
W.Ln("wallet is disabled")
return nil
}
// walletRPC := (*wg.cx.Config.WalletRPCListeners)[0]
// certs := wg.cx.Config.ReadCAFile()
// I.Ln("config.tls", wg.cx.Config.ClientTLS.True())
wg.WalletMutex.Lock()
if wg.WalletClient, e = rpcclient.New(
&rpcclient.ConnConfig{
Host: wg.cx.Config.WalletServer.V(),
Endpoint: "ws",
User: wg.cx.Config.Username.V(),
Pass: wg.cx.Config.Password.V(),
TLS: wg.cx.Config.ClientTLS.True(),
Certificates: wg.certs,
DisableAutoReconnect: false,
DisableConnectOnNew: false,
}, wg.WalletNotifications(), wg.cx.KillAll,
); E.Chk(e) {
wg.WalletMutex.Unlock()
return
}
wg.WalletMutex.Unlock()
// if e = wg.WalletClient.Connect(1); E.Chk(e) {
// return
// }
if e = wg.WalletClient.NotifyNewTransactions(true); !E.Chk(e) {
D.Ln("subscribed to new transactions")
} else {
// return
}
// if e = wg.WalletClient.NotifyBlocks(); E.Chk(e) {
// // return
// } else {
// D.Ln("subscribed to wallet client notify blocks")
// }
D.Ln("wallet connected")
return
}

121
cmd/gui/help.go Normal file

@@ -0,0 +1,121 @@
package gui
import (
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/p9icons"
"github.com/p9c/p9/version"
)
// HelpPage renders the help/about page showing build and version information
func (wg *WalletGUI) HelpPage() func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
return wg.VFlex().AlignMiddle().
Flexed(0.5, gel.EmptyMaxWidth()).
Rigid(
wg.H5("ParallelCoin Pod Gio Wallet").Alignment(text.Middle).Fn,
).
Rigid(
wg.Fill(
"DocBg", l.Center, wg.TextSize.V, 0, wg.Inset(
0.5,
wg.VFlex().
AlignMiddle().
Rigid(
wg.VFlex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("Built from git repository:").
Font("bariol bold").Fn,
).Fn,
).
Rigid(
wg.Caption(version.URL).Fn,
).
Fn,
).
Rigid(
wg.VFlex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("GitRef:").
Font("bariol bold").Fn,
).Fn,
).
Rigid(
wg.Caption(version.GitRef).Fn,
).
Fn,
).
Rigid(
wg.VFlex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("GitCommit:").
Font("bariol bold").Fn,
).Fn,
).
Rigid(
wg.Caption(version.GitCommit).Fn,
).
Fn,
).
Rigid(
wg.VFlex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("BuildTime:").
Font("bariol bold").Fn,
).Fn,
).
Rigid(
wg.Caption(version.BuildTime).Fn,
).
Fn,
).
Rigid(
wg.VFlex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("Tag:").
Font("bariol bold").Fn,
).Fn,
).
Rigid(
wg.Caption(version.Tag).Fn,
).
Fn,
).
Rigid(
wg.Icon().Scale(gel.Scales["H6"]).
Color("DocText").
Src(&p9icons.Gio).
Fn,
).
Rigid(
wg.Caption("powered by Gio").Fn,
).
Fn,
).Fn,
).Fn,
).
Flexed(0.5, gel.EmptyMaxWidth()).
Fn(gtx)
}
}

199
cmd/gui/history.go Normal file

@@ -0,0 +1,199 @@
package gui
import (
"fmt"
"time"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel"
)
// HistoryPage renders the transaction history, switching to a detail view when a transaction is selected
func (wg *WalletGUI) HistoryPage() l.Widget {
if wg.TxHistoryWidget == nil {
wg.TxHistoryWidget = func(gtx l.Context) l.Dimensions {
return l.Dimensions{Size: gtx.Constraints.Max}
}
}
return func(gtx l.Context) l.Dimensions {
if wg.openTxID.Load() != "" {
for i := range wg.txHistoryList {
if wg.txHistoryList[i].TxID == wg.openTxID.Load() {
txs := wg.txHistoryList[i]
// instead return detail view
var out []l.Widget
out = []l.Widget{
wg.txDetailEntry("Abandoned", fmt.Sprint(txs.Abandoned), "DocBg", false),
wg.txDetailEntry("Account", fmt.Sprint(txs.Account), "DocBgDim", false),
wg.txDetailEntry("Address", txs.Address, "DocBg", false),
wg.txDetailEntry("Block Hash", txs.BlockHash, "DocBgDim", true),
wg.txDetailEntry("Block Index", fmt.Sprint(txs.BlockIndex), "DocBg", false),
wg.txDetailEntry("Block Time", fmt.Sprint(time.Unix(txs.BlockTime, 0)), "DocBgDim", false),
wg.txDetailEntry("Category", txs.Category, "DocBg", false),
wg.txDetailEntry("Confirmations", fmt.Sprint(txs.Confirmations), "DocBgDim", false),
wg.txDetailEntry("Fee", fmt.Sprintf("%0.8f", txs.Fee), "DocBg", false),
wg.txDetailEntry("Generated", fmt.Sprint(txs.Generated), "DocBgDim", false),
wg.txDetailEntry("Involves Watch Only", fmt.Sprint(txs.InvolvesWatchOnly), "DocBg", false),
wg.txDetailEntry("Time", fmt.Sprint(time.Unix(txs.Time, 0)), "DocBgDim", false),
wg.txDetailEntry("Time Received", fmt.Sprint(time.Unix(txs.TimeReceived, 0)), "DocBg", false),
wg.txDetailEntry("Trusted", fmt.Sprint(txs.Trusted), "DocBgDim", false),
wg.txDetailEntry("TxID", txs.TxID, "DocBg", true),
// todo: add WalletConflicts here
// wg.txDetailEntry("Amount", fmt.Sprintf("%0.8f", txs.Amount), "DocBgDim", false),
wg.txDetailEntry("OtherAccount", fmt.Sprint(txs.OtherAccount), "DocBg", false),
}
le := func(gtx l.Context, index int) l.Dimensions {
return out[index](gtx)
}
return wg.VFlex().AlignStart().
Rigid(
wg.recentTxCardSummaryButton(&txs, wg.clickables["txPageBack"], "Primary", true),
// wg.H6(wg.openTxID.Load()).Fn,
).
Rigid(
wg.lists["txdetail"].
Vertical().
Length(len(out)).
ListElement(le).
Fn,
).
Fn(gtx)
// return wg.Flex().Flexed(
// 1,
// wg.H3(wg.openTxID.Load()).Fn,
// ).Fn(gtx)
}
}
// if we got to here, the tx was not found
if wg.originTxDetail != "" {
wg.MainApp.ActivePage(wg.originTxDetail)
wg.originTxDetail = ""
}
}
return wg.VFlex().
Rigid(
// wg.Fill("DocBg", l.Center, 0, 0,
// wg.Inset(0.25,
wg.Responsive(
wg.Size.Load(), gel.Widgets{
{
Widget: wg.VFlex().
Flexed(1, wg.HistoryPageView()).
// Rigid(
// // wg.Fill("DocBg",
// wg.Flex().AlignMiddle().SpaceBetween().
// Flexed(0.5, gel.EmptyMaxWidth()).
// Rigid(wg.HistoryPageStatusFilter()).
// Flexed(0.5, gel.EmptyMaxWidth()).
// Fn,
// // ).Fn,
// ).
// Rigid(
// wg.Fill("DocBg",
// wg.Flex().AlignMiddle().SpaceBetween().
// Rigid(wg.HistoryPager()).
// Rigid(wg.HistoryPagePerPageCount()).
// Fn,
// ).Fn,
// ).
Fn,
},
{
Size: 64,
Widget: wg.VFlex().
Flexed(1, wg.HistoryPageView()).
// Rigid(
// // wg.Fill("DocBg",
// wg.Flex().AlignMiddle().SpaceBetween().
// // Rigid(wg.HistoryPager()).
// Flexed(0.5, gel.EmptyMaxWidth()).
// Rigid(wg.HistoryPageStatusFilter()).
// Flexed(0.5, gel.EmptyMaxWidth()).
// // Rigid(wg.HistoryPagePerPageCount()).
// Fn,
// // ).Fn,
// ).
Fn,
},
},
).Fn,
// ).Fn,
// ).Fn,
).Fn(gtx)
}
}
func (wg *WalletGUI) HistoryPageView() l.Widget {
return wg.VFlex().
Rigid(
// wg.Fill("DocBg", l.Center, wg.TextSize.True, 0,
// wg.Inset(0.25,
wg.TxHistoryWidget,
// ).Fn,
// ).Fn,
).Fn
}
func (wg *WalletGUI) HistoryPageStatusFilter() l.Widget {
return wg.Flex().AlignMiddle().
Rigid(
wg.Inset(
0.25,
wg.Caption("show").Fn,
).Fn,
).
Rigid(
wg.Inset(
0.25,
func(gtx l.Context) l.Dimensions {
return wg.CheckBox(wg.bools["showGenerate"]).
TextColor("DocText").
TextScale(1).
Text("generate").
IconScale(1).
Fn(gtx)
},
).Fn,
).
Rigid(
wg.Inset(
0.25,
func(gtx l.Context) l.Dimensions {
return wg.CheckBox(wg.bools["showSent"]).
TextColor("DocText").
TextScale(1).
Text("sent").
IconScale(1).
Fn(gtx)
},
).Fn,
).
Rigid(
wg.Inset(
0.25,
func(gtx l.Context) l.Dimensions {
return wg.CheckBox(wg.bools["showReceived"]).
TextColor("DocText").
TextScale(1).
Text("received").
IconScale(1).
Fn(gtx)
},
).Fn,
).
Rigid(
wg.Inset(
0.25,
func(gtx l.Context) l.Dimensions {
return wg.CheckBox(wg.bools["showImmature"]).
TextColor("DocText").
TextScale(1).
Text("immature").
IconScale(1).
Fn(gtx)
},
).Fn,
).
Fn
}

78
cmd/gui/loadingscreen.go Normal file

@@ -0,0 +1,78 @@
package gui
import (
"golang.org/x/exp/shiny/materialdesign/icons"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/p9icons"
)
// getLoadingPage constructs the app shown while the wallet is loading or unlocking
func (wg *WalletGUI) getLoadingPage() (a *gel.App) {
a = wg.App(wg.Window.Width, wg.State.activePage, Break1).
SetMainDirection(l.Center + 1).
SetLogo(&p9icons.ParallelCoin).
SetAppTitleText("Parallelcoin Wallet")
a.Pages(
map[string]l.Widget{
"loading": wg.Page(
"loading", gel.Widgets{
gel.WidgetSize{
Widget:
func(gtx l.Context) l.Dimensions {
return a.Flex().Flexed(1, a.Direction().Center().Embed(a.H1("loading").Fn).Fn).Fn(gtx)
},
},
},
),
"unlocking": wg.Page(
"unlocking", gel.Widgets{
gel.WidgetSize{
Widget:
func(gtx l.Context) l.Dimensions {
return a.Flex().Flexed(1, a.Direction().Center().Embed(a.H1("unlocking").Fn).Fn).Fn(gtx)
},
},
},
),
},
)
a.ButtonBar(
[]l.Widget{
wg.PageTopBarButton(
"home", 4, &icons.ActionLock, func(name string) {
wg.unlockPage.ActivePage(name)
}, wg.unlockPage, "Danger",
),
// wg.Flex().Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).Fn,
},
)
a.StatusBar(
[]l.Widget{
wg.RunStatusPanel,
},
[]l.Widget{
wg.StatusBarButton(
"console", 2, &p9icons.Terminal, func(name string) {
wg.MainApp.ActivePage(name)
}, a,
),
wg.StatusBarButton(
"log", 4, &icons.ActionList, func(name string) {
D.Ln("click on button", name)
wg.unlockPage.ActivePage(name)
}, wg.unlockPage,
),
wg.StatusBarButton(
"settings", 5, &icons.ActionSettings, func(name string) {
wg.unlockPage.ActivePage(name)
}, wg.unlockPage,
),
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
},
)
// a.PushOverlay(wg.toasts.DrawToasts())
// a.PushOverlay(wg.dialog.DrawDialog())
return
}

43
cmd/gui/log.go Normal file

@@ -0,0 +1,43 @@
package gui
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// T.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

722
cmd/gui/main.go Normal file

@@ -0,0 +1,722 @@
package gui
import (
"crypto/rand"
"fmt"
"net"
"os"
"runtime"
"strings"
"sync"
"time"
"github.com/niubaoshu/gotiny"
"github.com/tyler-smith/go-bip39"
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/pkg/opts/meta"
"github.com/p9c/p9/pkg/opts/text"
"github.com/p9c/p9/pkg/chainrpc/p2padvt"
"github.com/p9c/p9/pkg/pipe"
"github.com/p9c/p9/pkg/transport"
"github.com/p9c/p9/pod/state"
uberatomic "go.uber.org/atomic"
"github.com/p9c/p9/pkg/gel/gio/op/paint"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/btcjson"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/cmd/gui/cfg"
"github.com/p9c/p9/pkg/apputil"
"github.com/p9c/p9/pkg/rpcclient"
"github.com/p9c/p9/pkg/util/rununit"
)
// Main is the entrypoint for the wallet GUI
func Main(cx *state.State) (e error) {
size := uberatomic.NewInt32(0)
noWallet := true
wg := &WalletGUI{
cx: cx,
invalidate: qu.Ts(16),
quit: cx.KillAll,
Size: size,
noWallet: &noWallet,
otherNodes: make(map[uint64]*nodeSpec),
certs: cx.Config.ReadCAFile(),
}
return wg.Run()
}
type BoolMap map[string]*gel.Bool
type ListMap map[string]*gel.List
type CheckableMap map[string]*gel.Checkable
type ClickableMap map[string]*gel.Clickable
type Inputs map[string]*gel.Input
type Passwords map[string]*gel.Password
type IncDecMap map[string]*gel.IncDec
type WalletGUI struct {
wg sync.WaitGroup
cx *state.State
quit qu.C
State *State
noWallet *bool
node, wallet, miner *rununit.RunUnit
walletToLock time.Time
walletLockTime int
ChainMutex, WalletMutex sync.Mutex
ChainClient, WalletClient *rpcclient.Client
WalletWatcher qu.C
*gel.Window
Size *uberatomic.Int32
MainApp *gel.App
invalidate qu.C
unlockPage *gel.App
loadingPage *gel.App
config *cfg.Config
configs cfg.GroupsMap
unlockPassword *gel.Password
sidebarButtons []*gel.Clickable
buttonBarButtons []*gel.Clickable
statusBarButtons []*gel.Clickable
receiveAddressbookClickables []*gel.Clickable
sendAddressbookClickables []*gel.Clickable
quitClickable *gel.Clickable
bools BoolMap
lists ListMap
checkables CheckableMap
clickables ClickableMap
inputs Inputs
passwords Passwords
incdecs IncDecMap
console *Console
RecentTxsWidget, TxHistoryWidget l.Widget
recentTxsClickables, txHistoryClickables []*gel.Clickable
txHistoryList []btcjson.ListTransactionsResult
openTxID, prevOpenTxID *uberatomic.String
originTxDetail string
txMx sync.Mutex
stateLoaded *uberatomic.Bool
currentReceiveQRCode *paint.ImageOp
currentReceiveAddress string
currentReceiveQR l.Widget
currentReceiveRegenClickable *gel.Clickable
currentReceiveCopyClickable *gel.Clickable
currentReceiveRegenerate *uberatomic.Bool
// currentReceiveGetNew *uberatomic.Bool
sendClickable *gel.Clickable
ready *uberatomic.Bool
mainDirection l.Direction
preRendering bool
// ReceiveAddressbook l.Widget
// SendAddressbook l.Widget
ReceivePage *ReceivePage
SendPage *SendPage
// toasts *toast.Toasts
// dialog *dialog.Dialog
createSeed []byte
createWords, showWords, createMatch string
createVerifying bool
restoring bool
lastUpdated uberatomic.Int64
multiConn *transport.Channel
otherNodes map[uint64]*nodeSpec
uuid uint64
peerCount *uberatomic.Int32
certs []byte
}
type nodeSpec struct {
time.Time
addr string
}
func (wg *WalletGUI) Run() (e error) {
wg.openTxID = uberatomic.NewString("")
var mc *transport.Channel
quit := qu.T()
// I.Ln(wg.cx.Config.MulticastPass.V(), string(wg.cx.Config.MulticastPass.
// Bytes()))
if mc, e = transport.NewBroadcastChannel(
"controller",
wg,
wg.cx.Config.MulticastPass.Bytes(),
transport.DefaultPort,
16384,
handlersMulticast,
quit,
); E.Chk(e) {
return
}
wg.multiConn = mc
wg.peerCount = uberatomic.NewInt32(0)
wg.prevOpenTxID = uberatomic.NewString("")
wg.stateLoaded = uberatomic.NewBool(false)
wg.currentReceiveRegenerate = uberatomic.NewBool(true)
wg.ready = uberatomic.NewBool(false)
wg.Window = gel.NewWindowP9(wg.quit)
wg.Dark = wg.cx.Config.DarkTheme
wg.Colors.SetDarkTheme(wg.Dark.True())
*wg.noWallet = true
wg.GetButtons()
wg.lists = wg.GetLists()
wg.clickables = wg.GetClickables()
wg.checkables = wg.GetCheckables()
before := func() { D.Ln("running before") }
after := func() { D.Ln("running after") }
I.Ln(os.Args[1:])
options := []string{os.Args[0]}
// options = append(options, wg.cx.Config.FoundArgs...)
// options = append(options, "pipelog")
wg.node = wg.GetRunUnit(
"NODE", before, after,
append(options, "node")...,
// "node",
)
wg.wallet = wg.GetRunUnit(
"WLLT", before, after,
append(options, "wallet")...,
// "wallet",
)
wg.miner = wg.GetRunUnit(
"MINE", before, after,
append(options, "kopach")...,
// "wallet",
)
// I.S(wg.node, wg.wallet, wg.miner)
wg.bools = wg.GetBools()
wg.inputs = wg.GetInputs()
wg.passwords = wg.GetPasswords()
// wg.toasts = toast.New(wg.th)
// wg.dialog = dialog.New(wg.th)
wg.console = wg.ConsolePage()
wg.quitClickable = wg.Clickable()
wg.incdecs = wg.GetIncDecs()
wg.Size = wg.Window.Width
wg.currentReceiveCopyClickable = wg.WidgetPool.GetClickable()
wg.currentReceiveRegenClickable = wg.WidgetPool.GetClickable()
wg.currentReceiveQR = func(gtx l.Context) l.Dimensions {
return l.Dimensions{}
}
wg.ReceivePage = wg.GetReceivePage()
wg.SendPage = wg.GetSendPage()
wg.MainApp = wg.GetAppWidget()
wg.State = GetNewState(wg.cx.ActiveNet, wg.MainApp.ActivePageGetAtomic())
wg.unlockPage = wg.getWalletUnlockAppWidget()
wg.loadingPage = wg.getLoadingPage()
if !apputil.FileExists(wg.cx.Config.WalletFile.V()) {
I.Ln("wallet file does not exist", wg.cx.Config.WalletFile.V())
} else {
*wg.noWallet = false
// if !*wg.cx.Config.NodeOff {
// // wg.startNode()
// wg.node.Start()
// }
if wg.cx.Config.Generate.True() && wg.cx.Config.GenThreads.V() != 0 {
// wg.startMiner()
wg.miner.Start()
}
wg.unlockPassword.Focus()
}
interrupt.AddHandler(
func() {
D.Ln("quitting wallet gui")
// consume.Kill(wg.Node)
// consume.Kill(wg.Miner)
// wg.gracefulShutdown()
wg.quit.Q()
},
)
go func() {
ticker := time.NewTicker(time.Second)
out:
for {
select {
case <-ticker.C:
go func() {
if e = wg.Advertise(); E.Chk(e) {
}
if wg.node.Running() {
if wg.ChainClient != nil {
if !wg.ChainClient.Disconnected() {
var pi []btcjson.GetPeerInfoResult
if pi, e = wg.ChainClient.GetPeerInfo(); E.Chk(e) {
return
}
wg.peerCount.Store(int32(len(pi)))
wg.Invalidate()
}
}
}
}()
case <-wg.invalidate.Wait():
T.Ln("invalidating render queue")
wg.Window.Window.Invalidate()
// TODO: make a more appropriate trigger for this - ie, when state actually changes.
// if wg.wallet.Running() && wg.stateLoaded.Load() {
// filename := filepath.Join(wg.cx.DataDir, "state.json")
// if e := wg.State.Save(filename, wg.cx.Config.WalletPass); E.Chk(e) {
// }
// }
case <-wg.cx.KillAll.Wait():
break out
case <-wg.quit.Wait():
break out
}
}
}()
if e := wg.Window.
Size(56, 32).
Title("ParallelCoin Wallet").
Open().
Run(
func(gtx l.Context) l.Dimensions {
return wg.Fill(
"DocBg", l.Center, 0, 0, func(gtx l.Context) l.Dimensions {
return gel.If(
*wg.noWallet,
wg.CreateWalletPage,
func(gtx l.Context) l.Dimensions {
switch {
case wg.stateLoaded.Load():
return wg.MainApp.Fn()(gtx)
default:
return wg.unlockPage.Fn()(gtx)
}
},
)(gtx)
},
).Fn(gtx)
},
wg.quit.Q,
wg.quit,
); E.Chk(e) {
}
wg.gracefulShutdown()
wg.quit.Q()
return
}
// GetButtons allocates the clickables used by the sidebar, button bar and status bar
func (wg *WalletGUI) GetButtons() {
wg.sidebarButtons = make([]*gel.Clickable, 12)
// wg.walletLocked.Store(true)
for i := range wg.sidebarButtons {
wg.sidebarButtons[i] = wg.Clickable()
}
wg.buttonBarButtons = make([]*gel.Clickable, 5)
for i := range wg.buttonBarButtons {
wg.buttonBarButtons[i] = wg.Clickable()
}
wg.statusBarButtons = make([]*gel.Clickable, 8)
for i := range wg.statusBarButtons {
wg.statusBarButtons[i] = wg.Clickable()
}
}
// ShuffleSeed generates a fresh 32 byte seed and derives its BIP39 mnemonic, formatted four words per line for display
func (wg *WalletGUI) ShuffleSeed() {
wg.createSeed = make([]byte, 32)
if _, e := rand.Read(wg.createSeed); E.Chk(e) {
panic(e)
}
var e error
var wk string
if wk, e = bip39.NewMnemonic(wg.createSeed); E.Chk(e) {
panic(e)
}
wg.createWords = wk
// wg.createMatch = wk
wks := strings.Split(wk, " ")
var out string
for i := 0; i < 24; i += 4 {
out += strings.Join(wks[i:i+4], " ")
if i+4 < 24 {
out += "\n"
}
}
wg.showWords = out
}
// GetInputs returns the text input widgets used in the wallet GUI
func (wg *WalletGUI) GetInputs() Inputs {
wg.ShuffleSeed()
return Inputs{
"receiveAmount": wg.Input("", "Amount", "DocText", "PanelBg", "DocBg", func(amt string) {}, func(string) {}),
"receiveMessage": wg.Input(
"",
"Title",
"DocText",
"PanelBg",
"DocBg",
func(pass string) {},
func(string) {},
),
"sendAddress": wg.Input(
"",
"Parallelcoin Address",
"DocText",
"PanelBg",
"DocBg",
func(amt string) {},
func(string) {},
),
"sendAmount": wg.Input("", "Amount", "DocText", "PanelBg", "DocBg", func(amt string) {}, func(string) {}),
"sendMessage": wg.Input(
"",
"Title",
"DocText",
"PanelBg",
"DocBg",
func(pass string) {},
func(string) {},
),
"console": wg.Input(
"",
"enter rpc command",
"DocText",
"Transparent",
"PanelBg",
func(pass string) {},
func(string) {},
),
"walletWords": wg.Input(
/*wg.createWords*/ "", "wallet word seed", "DocText", "DocBg", "PanelBg", func(string) {},
func(seedWords string) {
wg.createMatch = seedWords
wg.Invalidate()
},
),
"walletRestore": wg.Input(
/*wg.createWords*/ "", "enter seed to restore", "DocText", "DocBg", "PanelBg", func(string) {},
func(seedWords string) {
var e error
wg.createMatch = seedWords
if wg.createSeed, e = bip39.EntropyFromMnemonic(seedWords); E.Chk(e) {
return
}
wg.createWords = seedWords
wg.Invalidate()
},
),
// "walletSeed": wg.Input(
// seedString, "wallet seed", "DocText", "DocBg", "PanelBg", func(seedHex string) {
// var e error
// if wg.createSeed, e = hex.DecodeString(seedHex); E.Chk(e) {
// return
// }
// var wk string
// if wk, e = bip39.NewMnemonic(wg.createSeed); E.Chk(e) {
// panic(e)
// }
// wg.createWords=wk
// wks := strings.Split(wk, " ")
// var out string
// for i := 0; i < 24; i += 4 {
// out += strings.Join(wks[i:i+4], " ") + "\n"
// }
// wg.showWords = out
// }, nil,
// ),
}
}
// GetPasswords returns the passwords used in the wallet GUI
func (wg *WalletGUI) GetPasswords() (passwords Passwords) {
passwords = Passwords{
"passEditor": wg.Password(
"password (minimum 8 characters length)",
text.New(meta.Data{}, ""),
"DocText",
"DocBg",
"PanelBg",
func(pass string) {},
),
"confirmPassEditor": wg.Password(
"confirm",
text.New(meta.Data{}, ""),
"DocText",
"DocBg",
"PanelBg",
func(pass string) {},
),
"publicPassEditor": wg.Password(
"public password (optional)",
wg.cx.Config.WalletPass,
"Primary",
"DocText",
"PanelBg",
func(pass string) {},
),
}
return
}
func (wg *WalletGUI) GetIncDecs() IncDecMap {
return IncDecMap{
"generatethreads": wg.IncDec().
NDigits(2).
Min(0).
Max(runtime.NumCPU()).
SetCurrent(wg.cx.Config.GenThreads.V()).
ChangeHook(
func(n int) {
D.Ln("threads value now", n)
go func() {
D.Ln("setting thread count")
if wg.miner.Running() && n != 0 {
wg.miner.Stop()
wg.miner.Start()
}
if n == 0 {
wg.miner.Stop()
}
wg.cx.Config.GenThreads.Set(n)
_ = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V())
// if wg.miner.Running() {
// D.Ln("restarting miner")
// wg.miner.Stop()
// wg.miner.Start()
// }
}()
},
),
"idleTimeout": wg.IncDec().
Scale(4).
Min(60).
Max(3600).
NDigits(4).
Amount(60).
SetCurrent(300).
ChangeHook(
func(n int) {
D.Ln("idle timeout", time.Duration(n)*time.Second)
},
),
}
}
// GetRunUnit constructs a run unit that manages a child process with the given name and arguments
func (wg *WalletGUI) GetRunUnit(
name string, before, after func(), args ...string,
) *rununit.RunUnit {
I.Ln("getting rununit for", name, args)
// we have to copy the args otherwise further mutations affect this one
argsCopy := make([]string, len(args))
copy(argsCopy, args)
return rununit.New(name, before, after, pipe.SimpleLog(name),
pipe.FilterNone, wg.quit, argsCopy...)
}
func (wg *WalletGUI) GetLists() (o ListMap) {
return ListMap{
"createWallet": wg.List(),
"overview": wg.List(),
"balances": wg.List(),
"recent": wg.List(),
"send": wg.List(),
"sendMedium": wg.List(),
"sendAddresses": wg.List(),
"receive": wg.List(),
"receiveMedium": wg.List(),
"receiveAddresses": wg.List(),
"transactions": wg.List(),
"settings": wg.List(),
"received": wg.List(),
"history": wg.List(),
"txdetail": wg.List(),
}
}
func (wg *WalletGUI) GetClickables() ClickableMap {
return ClickableMap{
"balanceConfirmed": wg.Clickable(),
"balanceUnconfirmed": wg.Clickable(),
"balanceTotal": wg.Clickable(),
"createWallet": wg.Clickable(),
"createVerify": wg.Clickable(),
"createShuffle": wg.Clickable(),
"createRestore": wg.Clickable(),
"genesis": wg.Clickable(),
"autofill": wg.Clickable(),
"quit": wg.Clickable(),
"sendSend": wg.Clickable(),
"sendSave": wg.Clickable(),
"sendFromRequest": wg.Clickable(),
"receiveCreateNewAddress": wg.Clickable(),
"receiveClear": wg.Clickable(),
"receiveShow": wg.Clickable(),
"receiveRemove": wg.Clickable(),
"transactions10": wg.Clickable(),
"transactions30": wg.Clickable(),
"transactions50": wg.Clickable(),
"txPageForward": wg.Clickable(),
"txPageBack": wg.Clickable(),
"theme": wg.Clickable(),
}
}
func (wg *WalletGUI) GetCheckables() CheckableMap {
return CheckableMap{}
}
func (wg *WalletGUI) GetBools() BoolMap {
return BoolMap{
"runstate": wg.Bool(wg.node.Running()),
"encryption": wg.Bool(false),
"seed": wg.Bool(false),
"testnet": wg.Bool(false),
"lan": wg.Bool(false),
"solo": wg.Bool(false),
"ihaveread": wg.Bool(false),
"showGenerate": wg.Bool(true),
"showSent": wg.Bool(true),
"showReceived": wg.Bool(true),
"showImmature": wg.Bool(true),
}
}
var shuttingDown = false
// gracefulShutdown stops the miner, wallet and node run units and shuts down the RPC clients
func (wg *WalletGUI) gracefulShutdown() {
if shuttingDown {
D.Ln(log.Caller("already called gracefulShutdown", 1))
return
} else {
shuttingDown = true
}
D.Ln("\nquitting wallet gui\n")
if wg.miner.Running() {
D.Ln("stopping miner")
wg.miner.Stop()
wg.miner.Shutdown()
}
if wg.wallet.Running() {
D.Ln("stopping wallet")
wg.wallet.Stop()
wg.wallet.Shutdown()
wg.unlockPassword.Wipe()
// wg.walletLocked.Store(true)
}
if wg.node.Running() {
D.Ln("stopping node")
wg.node.Stop()
wg.node.Shutdown()
}
// wg.ChainMutex.Lock()
if wg.ChainClient != nil {
D.Ln("stopping chain client")
wg.ChainClient.Shutdown()
wg.ChainClient = nil
}
// wg.ChainMutex.Unlock()
// wg.WalletMutex.Lock()
if wg.WalletClient != nil {
D.Ln("stopping wallet client")
wg.WalletClient.Shutdown()
wg.WalletClient = nil
}
// wg.WalletMutex.Unlock()
// interrupt.Request()
// time.Sleep(time.Second)
wg.quit.Q()
}
var handlersMulticast = transport.Handlers{
// string(sol.Magic): processSolMsg,
string(p2padvt.Magic): processAdvtMsg,
// string(hashrate.Magic): processHashrateMsg,
}
// processAdvtMsg handles LAN peer advertisements, connecting to newly seen peers and garbage collecting stale ones
func processAdvtMsg(
ctx interface{}, src net.Addr, dst string, b []byte,
) (e error) {
wg := ctx.(*WalletGUI)
if wg.cx.Config.Discovery.False() {
return
}
if wg.ChainClient == nil {
T.Ln("no chain client to process advertisement")
return
}
var j p2padvt.Advertisment
gotiny.Unmarshal(b, &j)
// I.S(j)
peerUUID := j.UUID
// I.Ln("peerUUID of advertisement", peerUUID, wg.otherNodes)
if int(peerUUID) == wg.cx.Config.UUID.V() {
D.Ln("ignoring own advertisement message")
return
}
if _, ok := wg.otherNodes[peerUUID]; !ok {
var pi []btcjson.GetPeerInfoResult
if pi, e = wg.ChainClient.GetPeerInfo(); E.Chk(e) {
}
for i := range pi {
for k := range j.IPs {
jpa := net.JoinHostPort(k, fmt.Sprint(j.P2P))
if jpa == pi[i].Addr {
I.Ln("not connecting to node already connected outbound")
return
}
if jpa == pi[i].AddrLocal {
I.Ln("not connecting to node already connected inbound")
return
}
}
}
// if we haven't already added it to the permanent peer list, we can add it now
I.Ln("connecting to lan peer with same PSK", j.IPs, peerUUID)
wg.otherNodes[peerUUID] = &nodeSpec{}
wg.otherNodes[peerUUID].Time = time.Now()
for i := range j.IPs {
addy := net.JoinHostPort(i, fmt.Sprint(j.P2P))
for j := range pi {
if addy == pi[j].Addr || addy == pi[j].AddrLocal {
// not connecting to peer we already have connected to
return
}
}
}
// try all IPs
for addr := range j.IPs {
peerIP := net.JoinHostPort(addr, fmt.Sprint(j.P2P))
if e = wg.ChainClient.AddNode(peerIP, "add"); E.Chk(e) {
continue
}
D.Ln("connected to peer via address", peerIP)
wg.otherNodes[peerUUID].addr = peerIP
break
}
I.Ln(peerUUID, "added", "otherNodes", wg.otherNodes)
} else {
// update last seen time for peerUUID for garbage collection of stale disconnected
// nodes
D.Ln("other node seen again", peerUUID, wg.otherNodes[peerUUID].addr)
wg.otherNodes[peerUUID].Time = time.Now()
}
// I.S(wg.otherNodes)
// If we lose connection for more than 6 seconds we delete the node, and if it
// reappears it can be reconnected
for i := range wg.otherNodes {
D.Ln(i, wg.otherNodes[i])
tn := time.Now()
if tn.Sub(wg.otherNodes[i].Time) > time.Second*6 {
// also remove from connection manager
if e = wg.ChainClient.AddNode(wg.otherNodes[i].addr, "remove"); E.Chk(e) {
}
D.Ln("deleting", tn, wg.otherNodes[i], i)
delete(wg.otherNodes, i)
}
}
// on := int32(len(wg.otherNodes))
// wg.otherNodeCount.Store(on)
return
}

822
cmd/gui/overview.go Normal file

@@ -0,0 +1,822 @@
package gui
import (
"fmt"
"strings"
"time"
icons2 "golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/gel/gio/text"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/btcjson"
)
func (wg *WalletGUI) balanceCard() func(gtx l.Context) l.Dimensions {
// gtx.Constraints.Min.X = int(wg.TextSize.True * sp.inputWidth)
return func(gtx l.Context) l.Dimensions {
gtx.Constraints.Min.X =
int(wg.TextSize.V * 16)
gtx.Constraints.Max.X =
int(wg.TextSize.V * 16)
return wg.VFlex().
AlignStart().
Rigid(
wg.Inset(
0.25,
wg.H5("balances").
Fn,
).Fn,
).
Rigid(
wg.Fill(
"Primary", l.E, 0, 0,
wg.Inset(
0.25,
wg.Flex().
AlignEnd().
Flexed(
1,
wg.VFlex().AlignEnd().
Rigid(
wg.ButtonLayout(wg.clickables["balanceConfirmed"]).SetClick(
func() {
go wg.WriteClipboard(
fmt.Sprintf(
"%6.8f",
wg.State.balance.Load(),
),
)
},
).Background("Transparent").Embed(
wg.Flex().AlignEnd().Flexed(
1,
wg.Inset(
0.5,
wg.Caption(
"confirmed"+leftPadTo(
14, 14,
fmt.Sprintf(
"%6.8f",
wg.State.balance.Load(),
),
),
).
Font("go regular").
Alignment(text.End).
Color("DocText").Fn,
).Fn,
).Fn,
).Fn,
).
Rigid(
wg.ButtonLayout(wg.clickables["balanceUnconfirmed"]).SetClick(
func() {
go wg.WriteClipboard(
fmt.Sprintf(
"%6.8f",
wg.State.balanceUnconfirmed.Load(),
),
)
},
).Background("Transparent").Embed(
wg.Flex().AlignEnd().Flexed(
1,
wg.Inset(
0.5,
wg.Caption(
"unconfirmed"+leftPadTo(
14, 14,
fmt.Sprintf(
"%6.8f",
wg.State.balanceUnconfirmed.Load(),
),
),
).
Font("go regular").
Alignment(text.End).
Color("DocText").Fn,
).Fn,
).Fn,
).Fn,
).
Rigid(
wg.ButtonLayout(wg.clickables["balanceTotal"]).SetClick(
func() {
go wg.WriteClipboard(
fmt.Sprintf(
"%6.8f",
wg.State.balance.Load()+wg.State.balanceUnconfirmed.Load(),
),
)
},
).Background("Transparent").Embed(
wg.Flex().AlignEnd().Flexed(
1,
wg.Inset(
0.5,
wg.H5(
"total"+leftPadTo(
14, 14, fmt.Sprintf(
"%6.8f", wg.State.balance.Load()+wg.
State.balanceUnconfirmed.Load(),
),
),
).
Alignment(text.End).
Color("DocText").Fn,
).
Fn,
).Fn,
).Fn,
).Fn,
).Fn,
).Fn,
).Fn,
).Fn(gtx)
}
}
// OverviewPage renders the balances card and the recent transactions list
func (wg *WalletGUI) OverviewPage() l.Widget {
if wg.RecentTxsWidget == nil {
wg.RecentTxsWidget = func(gtx l.Context) l.Dimensions {
return l.Dimensions{Size: gtx.Constraints.Max}
}
}
return func(gtx l.Context) l.Dimensions {
return wg.Responsive(
wg.Size.Load(), gel.Widgets{
{
Size: 0,
Widget:
wg.VFlex().AlignStart().
Rigid(
// wg.ButtonInset(0.25,
wg.VFlex().
Rigid(
wg.Inset(
0.25,
wg.balanceCard(),
).Fn,
).Fn,
// ).Fn,
).
// Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Flexed(
1,
wg.VFlex().AlignStart().
Rigid(
wg.Inset(
0.25,
wg.H5("Recent Transactions").Fn,
).Fn,
).
Flexed(
1,
// wg.Inset(0.5,
wg.RecentTxsWidget,
// p9.EmptyMaxWidth(),
// ).Fn,
).
Fn,
).
Fn,
},
{
Size: 64,
Widget: wg.Flex().AlignStart().
Rigid(
// wg.ButtonInset(0.25,
wg.VFlex(). // SpaceSides().AlignStart().
Rigid(
wg.Inset(
0.25,
wg.balanceCard(),
).Fn,
).Fn,
// ).Fn,
).
// Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Flexed(
1,
// wg.Inset(
// 0.25,
wg.VFlex().AlignStart().
Rigid(
wg.Inset(
0.25,
wg.H5("Recent transactions").Fn,
).Fn,
).
Flexed(
1,
// wg.Fill("DocBg", l.W, wg.TextSize.True, 0, wg.Inset(0.25,
wg.RecentTxsWidget,
// p9.EmptyMaxWidth(),
// ).Fn).Fn,
).
Fn,
// ).
// Fn,
).
Fn,
},
},
).Fn(gtx)
}
}
func (wg *WalletGUI) recentTxCardStub(txs *btcjson.ListTransactionsResult) l.Widget {
return wg.Inset(
0.25,
wg.Flex().
// AlignBaseline().
// AlignStart().
// SpaceEvenly().
SpaceBetween().
// Flexed(
// 1,
// wg.Inset(
// 0.25,
// wg.Caption(txs.Address).
// Font("go regular").
// Color("PanelText").
// TextScale(0.66).
// Alignment(text.End).
// Fn,
// ).Fn,
// ).
Rigid(
wg.Caption(fmt.Sprintf("%-6.8f DUO", txs.Amount)).Font("go regular").Color("DocText").Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.DeviceWidgets).Fn,
).
Rigid(
wg.Caption(fmt.Sprint(txs.BlockIndex)).Fn,
// wg.buttonIconText(txs.clickBlock,
// fmt.Sprint(*txs.BlockIndex),
// &icons2.DeviceWidgets,
// wg.blockPage(*txs.BlockIndex)),
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionCheckCircle).Fn,
).
Rigid(
wg.Caption(fmt.Sprintf("%d ", txs.Confirmations)).Fn,
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
func(gtx l.Context) l.Dimensions {
switch txs.Category {
case "generate":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionStars).Fn(gtx)
case "immature":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ImageTimeLapse).Fn(gtx)
case "receive":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionPlayForWork).Fn(gtx)
case "unknown":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.AVNewReleases).Fn(gtx)
}
return l.Dimensions{}
},
).
Rigid(
wg.Caption(txs.Category+" ").Fn,
).
Fn,
).
// Flexed(1, gel.EmptyMaxWidth()).
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.DeviceAccessTime).Fn,
).
Rigid(
wg.Caption(
time.Unix(
txs.Time,
0,
).Format("02 Jan 06 15:04:05 MST"),
).Font("go regular").
// Alignment(text.End).
Color("PanelText").Fn,
).
Fn,
).
Fn,
).
Fn
}
func (wg *WalletGUI) recentTxCardSummary(txs *btcjson.ListTransactionsResult) l.Widget {
return wg.VFlex().AlignStart().SpaceBetween().
Rigid(
// wg.Inset(
// 0.25,
wg.Flex().AlignStart().SpaceBetween().
Rigid(
wg.H6(fmt.Sprintf("%-.8f DUO", txs.Amount)).Alignment(text.Start).Color("PanelText").Fn,
).
Flexed(
1,
wg.Inset(
0.25,
wg.Caption(txs.Address).
Font("go regular").
Color("PanelText").
TextScale(0.66).
Alignment(text.End).
Fn,
).Fn,
).Fn,
// ).Fn,
).
Rigid(
// wg.Inset(
// 0.25,
wg.Flex().
Flexed(
1,
wg.Flex().
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.DeviceWidgets).Fn,
).
// Rigid(
// wg.Caption(fmt.Sprint(*txs.BlockIndex)).Fn,
// // wg.buttonIconText(txs.clickBlock,
// // fmt.Sprint(*txs.BlockIndex),
// // &icons2.DeviceWidgets,
// // wg.blockPage(*txs.BlockIndex)),
// ).
Rigid(
wg.Caption(fmt.Sprintf("%d ", txs.BlockIndex)).Fn,
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionCheckCircle).Fn,
).
Rigid(
wg.Caption(fmt.Sprintf("%d ", txs.Confirmations)).Fn,
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
func(gtx l.Context) l.Dimensions {
switch txs.Category {
case "generate":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionStars).Fn(gtx)
case "immature":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ImageTimeLapse).Fn(gtx)
case "receive":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.ActionPlayForWork).Fn(gtx)
case "unknown":
return wg.Icon().Color("PanelText").Scale(1).Src(&icons2.AVNewReleases).Fn(gtx)
}
return l.Dimensions{}
},
).
Rigid(
wg.Caption(txs.Category+" ").Fn,
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(1).Src(&icons2.DeviceAccessTime).Fn,
).
Rigid(
wg.Caption(
time.Unix(
txs.Time,
0,
).Format("02 Jan 06 15:04:05 MST"),
).Color("PanelText").Fn,
).
Fn,
).Fn,
).Fn,
// ).Fn,
).Fn
}
func (wg *WalletGUI) recentTxCardSummaryButton(
txs *btcjson.ListTransactionsResult,
clickable *gel.Clickable,
bgColor string, back bool,
) l.Widget {
return wg.ButtonLayout(
clickable.SetClick(
func() {
D.Ln("clicked tx")
// D.S(txs)
curr := wg.openTxID.Load()
if curr == txs.TxID {
wg.prevOpenTxID.Store(wg.openTxID.Load())
wg.openTxID.Store("")
moveto := wg.originTxDetail
if moveto == "" {
moveto = wg.MainApp.ActivePageGet()
}
wg.MainApp.ActivePage(moveto)
} else {
if wg.MainApp.ActivePageGet() == "home" {
wg.originTxDetail = "home"
wg.MainApp.ActivePage("history")
} else {
wg.originTxDetail = "history"
}
wg.openTxID.Store(txs.TxID)
}
},
),
).
Background(bgColor).
Embed(
gel.If(
back,
wg.Flex().
Rigid(
wg.Icon().Color("PanelText").Scale(4).Src(&icons2.NavigationArrowBack).Fn,
).
Rigid(
wg.Inset(0.5, gel.EmptyMinWidth()).Fn,
).
Flexed(
1,
wg.Fill(
"DocBg", l.Center, 0, 0, wg.Inset(
0.5,
wg.recentTxCardSummary(txs),
).Fn,
).Fn,
).
Fn,
wg.Flex().
Rigid(
wg.Inset(0.5, gel.EmptyMaxHeight()).Fn,
).
Flexed(
1,
wg.Fill(
"DocBg", l.Center, 0, 0, wg.Inset(
0.5,
wg.recentTxCardSummary(txs),
).Fn,
).Fn,
).
Fn,
),
).Fn
}
func (wg *WalletGUI) recentTxCardSummaryButtonGenerate(
txs *btcjson.ListTransactionsResult,
clickable *gel.Clickable,
bgColor string, back bool,
) l.Widget {
return wg.ButtonLayout(
clickable.SetClick(
func() {
D.Ln("clicked tx")
// D.S(txs)
curr := wg.openTxID.Load()
if curr == txs.TxID {
wg.prevOpenTxID.Store(wg.openTxID.Load())
wg.openTxID.Store("")
moveto := wg.originTxDetail
if moveto == "" {
moveto = wg.MainApp.ActivePageGet()
}
wg.MainApp.ActivePage(moveto)
} else {
if wg.MainApp.ActivePageGet() == "home" {
wg.originTxDetail = "home"
wg.MainApp.ActivePage("history")
} else {
wg.originTxDetail = "history"
}
wg.openTxID.Store(txs.TxID)
}
},
),
).
Background(bgColor).
Embed(
wg.Flex().AlignStart().
Rigid(
// wg.Fill(
// "Primary", l.W, 0, 0, wg.Inset(
// 0.5,
gel.If(
back,
wg.Flex().AlignStart().
Rigid(
wg.Icon().Color("PanelText").Scale(4).Src(&icons2.NavigationArrowBack).Fn,
).
Flexed(
1,
wg.recentTxCardSummary(txs),
).
Fn,
wg.Flex().AlignStart().
Flexed(
1,
wg.recentTxCardStub(txs),
).
Fn,
// wg.Flex().
// Rigid(
// wg.Inset(0.5, gel.EmptyMaxHeight()).Fn,
// ).
// Flexed(
// 1,
// wg.Fill(
// "DocBg", l.Center, 0, 0, wg.Inset(
// 0.5,
// wg.recentTxCardSummary(txs),
// ).Fn,
// ).Fn,
// ).
// Fn,
),
).Fn,
// ).Fn,
// ).Fn,
).Fn
}
func (wg *WalletGUI) recentTxCardDetail(txs *btcjson.ListTransactionsResult, clickable *gel.Clickable) l.Widget {
return wg.VFlex().
Rigid(
wg.Fill(
"Primary", l.Center, wg.TextSize.V, 0,
wg.recentTxCardSummaryButton(txs, clickable, "Primary", false),
).Fn,
// ).
// Rigid(
// wg.Fill(
// "DocBg", l.Center, wg.TextSize.True, 0,
// wg.Flex().
// Flexed(
// 1,
// wg.Inset(
// 0.25,
// wg.VFlex().
// Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
// Rigid(
// wg.H6("Transaction Details").
// Color("PanelText").
// Fn,
// ).
// Rigid(
// wg.Inset(
// 0.25,
// wg.VFlex().
// Rigid(
// wg.txDetailEntry("Transaction ID", txs.TxID),
// ).
// Rigid(
// wg.txDetailEntry("Address", txs.Address),
// ).
// Rigid(
// wg.txDetailEntry("Amount", fmt.Sprintf("%0.8f", txs.Amount)),
// ).
// Rigid(
// wg.txDetailEntry("In Block", fmt.Sprint(txs.BlockIndex)),
// ).
// Rigid(
// wg.txDetailEntry("First Mined", fmt.Sprint(txs.BlockTime)),
// ).
// Rigid(
// wg.txDetailEntry("Category", txs.Category),
// ).
// Rigid(
// wg.txDetailEntry("Confirmations", fmt.Sprint(txs.Confirmations)),
// ).
// Rigid(
// wg.txDetailEntry("Fee", fmt.Sprintf("%0.8f", txs.Fee)),
// ).
// Rigid(
// wg.txDetailEntry("Confirmations", fmt.Sprint(txs.Confirmations)),
// ).
// Rigid(
// wg.txDetailEntry("Involves Watch Only", fmt.Sprint(txs.InvolvesWatchOnly)),
// ).
// Rigid(
// wg.txDetailEntry("Time", fmt.Sprint(txs.Time)),
// ).
// Rigid(
// wg.txDetailEntry("Time Received", fmt.Sprint(txs.TimeReceived)),
// ).
// Rigid(
// wg.txDetailEntry("Trusted", fmt.Sprint(txs.Trusted)),
// ).
// Rigid(
// wg.txDetailEntry("Abandoned", fmt.Sprint(txs.Abandoned)),
// ).
// Rigid(
// wg.txDetailEntry("BIP125 Replaceable", fmt.Sprint(txs.BIP125Replaceable)),
// ).
// Fn,
// ).Fn,
// ).Fn,
// ).Fn,
// ).Fn,
// ).Fn,
).Fn
}
func (wg *WalletGUI) txDetailEntry(name, detail, bgColor string, small bool) l.Widget {
content := wg.Body1
if small {
content = wg.Caption
}
return wg.Fill(
bgColor, l.Center, wg.TextSize.V, 0,
wg.Flex().AlignBaseline().
Flexed(
0.25,
wg.Inset(
0.25,
wg.Body1(name).
Color("PanelText").
Font("bariol bold").
Fn,
).Fn,
).
Flexed(
0.75,
wg.Flex().SpaceStart().Rigid(
wg.Inset(
0.25,
content(detail).Font("go regular").
Color("PanelText").
Fn,
).Fn,
).Fn,
).Fn,
).Fn
}
// RecentTransactions generates a display showing recent transactions
//
// fields to use: Address, Amount, BlockIndex, BlockTime, Category, Confirmations, Generated
func (wg *WalletGUI) RecentTransactions(n int, listName string) l.Widget {
wg.txMx.Lock()
defer wg.txMx.Unlock()
// wg.ready.Store(false)
var out []l.Widget
// first := true
// out = append(out)
var txList []btcjson.ListTransactionsResult
var clickables []*gel.Clickable
txList = wg.txHistoryList
switch listName {
case "history":
clickables = wg.txHistoryClickables
case "recent":
// txList = wg.txRecentList
clickables = wg.recentTxsClickables
}
ltxl := len(txList)
ltc := len(clickables)
if ltxl > ltc {
count := ltxl - ltc
for ; count > 0; count-- {
clickables = append(clickables, wg.Clickable())
}
}
if len(clickables) == 0 {
return func(gtx l.Context) l.Dimensions {
return l.Dimensions{Size: gtx.Constraints.Max}
}
}
D.Ln(">>>>>>>>>>>>>>>> iterating transactions", n, listName)
var collected int
for x := range txList {
if collected >= n && n > 0 {
break
}
txs := txList[x]
switch listName {
case "history":
collected++
case "recent":
if txs.Category == "generate" || txs.Category == "immature" || (txs.Amount < 0 && txs.Fee == 0) {
continue
} else {
collected++
}
}
// spacer
// if !first {
// out = append(
// out,
// wg.Inset(0.25, gel.EmptyMaxWidth()).Fn,
// )
// } else {
// first = false
// }
ck := clickables[x]
out = append(
out,
func(gtx l.Context) l.Dimensions {
return gel.If(
txs.Category == "immature",
wg.recentTxCardSummaryButtonGenerate(&txs, ck, "DocBg", false),
gel.If(
txs.Category == "send",
wg.recentTxCardSummaryButton(&txs, ck, "Danger", false),
gel.If(
txs.Category == "receive",
wg.recentTxCardSummaryButton(&txs, ck, "Success", false),
gel.If(
txs.Category == "generate",
wg.recentTxCardSummaryButtonGenerate(&txs, ck, "DocBg", false),
gel.If(
wg.prevOpenTxID.Load() == txs.TxID,
wg.recentTxCardSummaryButton(&txs, ck, "Primary", false),
wg.recentTxCardSummaryButton(&txs, ck, "DocBg", false),
),
),
),
),
)(gtx)
},
)
// out = append(out,
// wg.Caption(txs.TxID).
// Font("go regular").
// Color("PanelText").
// TextScale(0.5).Fn,
// )
// out = append(
// out,
// wg.Fill(
// "DocBg", l.W, 0, 0,
//
// ).Fn,
// )
}
le := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(
0.25,
out[index],
).Fn(gtx)
}
wo := func(gtx l.Context) l.Dimensions {
return wg.VFlex().AlignStart().
Rigid(
wg.lists[listName].
Vertical().
Length(len(out)).
ListElement(le).
Fn,
).Fn(gtx)
}
D.Ln(">>>>>>>>>>>>>>>> history widget completed", n, listName)
switch listName {
case "history":
wg.TxHistoryWidget = wo
case "recent":
wg.RecentTxsWidget = wo
}
return wo
}
func leftPadTo(length, limit int, txt string) string {
if len(txt) > limit {
return txt[:limit]
}
if len(txt) == limit {
return txt
}
pad := length - len(txt)
if pad < 0 {
// guard: length can be smaller than len(txt) when len(txt) <= limit
return txt
}
return strings.Repeat(" ", pad) + txt
}
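The balance rows above combine `fmt.Sprintf("%6.8f", …)` with `leftPadTo` to get a fixed-width column; since precision 8 always produces more than 6 characters, the width verb is effectively a no-op and the padding helper does the real alignment work. A minimal standalone sketch (helper re-declared here for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// leftPadTo mirrors the helper above: truncate txt to limit characters,
// then left-pad with spaces to length characters.
func leftPadTo(length, limit int, txt string) string {
	if len(txt) > limit {
		return txt[:limit]
	}
	if len(txt) == limit {
		return txt
	}
	pad := length - len(txt)
	if pad < 0 {
		pad = 0
	}
	return strings.Repeat(" ", pad) + txt
}

func main() {
	// "%6.8f" keeps 8 decimal places; the output is always wider than 6,
	// so leftPadTo supplies the fixed 14-character column instead.
	s := fmt.Sprintf("%6.8f", 1.5)
	fmt.Printf("%q\n", leftPadTo(14, 14, s)) // "    1.50000000"
}
```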

346
cmd/gui/receive.go Normal file
@@ -0,0 +1,346 @@
package gui
import (
"fmt"
"strconv"
"github.com/p9c/p9/pkg/amt"
"github.com/atotto/clipboard"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/p9c/p9/pkg/gel"
)
const Break1 = 48
type ReceivePage struct {
wg *WalletGUI
inputWidth, break1 float32
sm, md, lg, xl l.Widget
urn string
}
func (wg *WalletGUI) GetReceivePage() (rp *ReceivePage) {
rp = &ReceivePage{
wg: wg,
inputWidth: 17,
break1: 48,
}
rp.sm = rp.SmallList
return
}
func (rp *ReceivePage) Fn(gtx l.Context) l.Dimensions {
wg := rp.wg
return wg.Responsive(
wg.Size.Load(), gel.Widgets{
{
Widget: rp.SmallList,
},
{
Size: rp.break1,
Widget: rp.MediumList,
},
},
).Fn(gtx)
}
func (rp *ReceivePage) SmallList(gtx l.Context) l.Dimensions {
wg := rp.wg
smallWidgets := []l.Widget{
wg.Direction().Center().Embed(rp.QRButton()).Fn,
rp.InputMessage(),
rp.AmountInput(),
rp.MessageInput(),
rp.RegenerateButton(),
rp.AddressbookHeader(),
}
smallWidgets = append(smallWidgets, rp.GetAddressbookHistoryCards("DocBg")...)
le := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(0.25, smallWidgets[index]).Fn(gtx)
}
return wg.VFlex().AlignStart().
Flexed(
1,
wg.lists["receive"].
Vertical().Start().
Length(len(smallWidgets)).
ListElement(le).Fn,
).Fn(gtx)
}
func (rp *ReceivePage) InputMessage() l.Widget {
return rp.wg.Body2("Input details to request a payment").Alignment(text.Middle).Fn
}
func (rp *ReceivePage) MediumList(gtx l.Context) l.Dimensions {
wg := rp.wg
qrWidget := []l.Widget{
wg.Direction().Center().Embed(rp.QRButton()).Fn,
rp.InputMessage(),
rp.AmountInput(),
rp.MessageInput(),
rp.RegenerateButton(),
// rp.AddressbookHeader(),
}
qrLE := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(0.25, qrWidget[index]).Fn(gtx)
}
var historyWidget []l.Widget
historyWidget = append(historyWidget, rp.GetAddressbookHistoryCards("DocBg")...)
historyLE := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(
0.25,
historyWidget[index],
).Fn(gtx)
}
return wg.Flex().AlignStart().
Rigid(
func(gtx l.Context) l.Dimensions {
gtx.Constraints.Max.X, gtx.Constraints.Min.X = int(wg.TextSize.V*rp.inputWidth),
int(wg.TextSize.V*rp.inputWidth)
return wg.VFlex().
Rigid(
wg.lists["receiveMedium"].
Vertical().
Length(len(qrWidget)).
ListElement(qrLE).Fn,
).Fn(gtx)
},
).
Rigid(
wg.VFlex().AlignStart().
Rigid(
rp.AddressbookHeader(),
).
Rigid(
wg.lists["receiveAddresses"].
Vertical().
Length(len(historyWidget)).
ListElement(historyLE).Fn,
).
Fn,
).Fn(gtx)
}
func (rp *ReceivePage) Spacer() l.Widget {
return rp.wg.Flex().AlignMiddle().Flexed(1, rp.wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).Fn
}
func (rp *ReceivePage) GetAddressbookHistoryCards(bg string) (widgets []l.Widget) {
wg := rp.wg
avail := len(wg.receiveAddressbookClickables)
req := len(wg.State.receiveAddresses)
if req > avail {
for i := 0; i < req-avail; i++ {
wg.receiveAddressbookClickables = append(wg.receiveAddressbookClickables, wg.WidgetPool.GetClickable())
}
}
for x := range wg.State.receiveAddresses {
j := x
i := len(wg.State.receiveAddresses) - 1 - x
widgets = append(
widgets, func(gtx l.Context) l.Dimensions {
return wg.ButtonLayout(
wg.receiveAddressbookClickables[i].SetClick(
func() {
msg := wg.State.receiveAddresses[i].Message
if len(msg) > 64 {
msg = msg[:64]
}
qrText := fmt.Sprintf(
"parallelcoin:%s?amount=%8.8f&message=%s",
wg.State.receiveAddresses[i].Address,
wg.State.receiveAddresses[i].Amount.ToDUO(),
msg,
)
D.Ln("clicked receive address list item", j)
if e := clipboard.WriteAll(qrText); E.Chk(e) {
}
wg.GetNewReceivingQRCode(qrText)
rp.urn = qrText
},
),
).
Background(bg).
Embed(
wg.Inset(
0.25,
wg.VFlex().AlignStart().
Rigid(
wg.Flex().AlignBaseline().
Rigid(
wg.Caption(wg.State.receiveAddresses[i].Address).
Font("go regular").Fn,
).
Flexed(
1,
wg.Body1(wg.State.receiveAddresses[i].Amount.String()).
Alignment(text.End).Fn,
).
Fn,
).
Rigid(
wg.Caption(wg.State.receiveAddresses[i].Message).MaxLines(1).Fn,
).
Fn,
).
Fn,
).Fn(gtx)
},
)
}
return
}
func (rp *ReceivePage) QRMessage() l.Widget {
return rp.wg.Body2("Scan to send or click to copy").Alignment(text.Middle).Fn
}
func (rp *ReceivePage) GetQRText() string {
wg := rp.wg
msg := wg.inputs["receiveMessage"].GetText()
if len(msg) > 64 {
msg = msg[:64]
}
return fmt.Sprintf(
"parallelcoin:%s?amount=%s&message=%s",
wg.State.currentReceivingAddress.Load().EncodeAddress(),
wg.inputs["receiveAmount"].GetText(),
msg,
)
}
func (rp *ReceivePage) QRButton() l.Widget {
wg := rp.wg
if !wg.WalletAndClientRunning() || wg.currentReceiveQRCode == nil {
return func(gtx l.Context) l.Dimensions {
return l.Dimensions{}
}
}
return wg.VFlex().
Rigid(
wg.ButtonLayout(
wg.currentReceiveCopyClickable.SetClick(
func() {
D.Ln("clicked qr code copy clicker")
if e := clipboard.WriteAll(rp.urn); E.Chk(e) {
}
},
),
).
Background("white").
Embed(
wg.Inset(
0.125,
wg.Image().Src(*wg.currentReceiveQRCode).Scale(1).Fn,
).Fn,
).Fn,
).Rigid(
rp.QRMessage(),
).Fn
}
func (rp *ReceivePage) AddressbookHeader() l.Widget {
wg := rp.wg
return wg.Flex().
Rigid(
wg.Inset(
0.25,
wg.H5("Receive Address History").Alignment(text.Middle).Fn,
).Fn,
).Fn
}
func (rp *ReceivePage) AmountInput() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := rp.wg
// gtx.Constraints.Max.X, gtx.Constraints.Min.X = int(wg.TextSize.True*rp.inputWidth), int(wg.TextSize.True*rp.inputWidth)
return wg.inputs["receiveAmount"].Fn(gtx)
}
}
func (rp *ReceivePage) MessageInput() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := rp.wg
// gtx.Constraints.Max.X, gtx.Constraints.Min.X = int(wg.TextSize.True*rp.inputWidth), int(wg.TextSize.True*rp.inputWidth)
return wg.inputs["receiveMessage"].Fn(gtx)
}
}
func (rp *ReceivePage) RegenerateButton() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := rp.wg
if wg.inputs["receiveAmount"].GetText() == "" || wg.inputs["receiveMessage"].GetText() == "" {
gtx.Queue = nil
}
// gtx.Constraints.Max.X, gtx.Constraints.Min.X = int(wg.TextSize.True*rp.inputWidth), int(wg.TextSize.True*rp.inputWidth)
return wg.ButtonLayout(
wg.currentReceiveRegenClickable.
SetClick(
func() {
D.Ln("clicked regenerate button")
var amount float64
var am amt.Amount
var e error
if amount, e = strconv.ParseFloat(
wg.inputs["receiveAmount"].GetText(),
64,
); !E.Chk(e) {
if am, e = amt.NewAmount(amount); E.Chk(e) {
}
}
msg := wg.inputs["receiveMessage"].GetText()
if am == 0 || msg == "" {
// never store an entry without both fields filled
return
}
if len(wg.State.receiveAddresses) > 0 &&
(wg.State.receiveAddresses[len(wg.State.receiveAddresses)-1].Amount == 0 ||
wg.State.receiveAddresses[len(wg.State.receiveAddresses)-1].Message == "") {
// the most recent entry is missing its amount or message; such entries are assumed
// to be unused, so fill it in rather than generating a new address and entry
wg.State.receiveAddresses[len(wg.State.receiveAddresses)-1].Amount = am
wg.State.receiveAddresses[len(wg.State.receiveAddresses)-1].Message = msg
} else {
// go func() {
wg.GetNewReceivingAddress()
msg := wg.inputs["receiveMessage"].GetText()
if len(msg) > 64 {
msg = msg[:64]
// enforce the field length limit
wg.inputs["receiveMessage"].SetText(msg)
}
qrText := fmt.Sprintf(
"parallelcoin:%s?amount=%8.8f&message=%s",
wg.State.currentReceivingAddress.Load().EncodeAddress(),
am.ToDUO(),
msg,
)
rp.urn = qrText
wg.GetNewReceivingQRCode(rp.urn)
// }()
}
// force user to fill fields again after regenerate to stop duplicate entries especially from
// accidental double clicks/taps
wg.inputs["receiveAmount"].SetText("")
wg.inputs["receiveMessage"].SetText("")
wg.Invalidate()
},
),
).
Background("Primary").
Embed(
wg.Inset(
0.5,
wg.H6("regenerate").Color("Light").Fn,
).
Fn,
).
Fn(gtx)
}
}
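The regenerate handler above parses the amount input with `strconv.ParseFloat` and converts it via `amt.NewAmount` before deciding whether to store an entry. A hedged sketch of that conversion step without the `pkg/amt` dependency — the 1 DUO = 1e8 base-unit ratio is an assumption here, and `parseAmount` is a hypothetical stand-in, not the real API:

```go
package main

import (
	"errors"
	"fmt"
	"math"
	"strconv"
)

// parseAmount sketches what strconv.ParseFloat + amt.NewAmount do above:
// parse user text and convert a DUO amount into integer base units
// (assuming 1 DUO = 1e8 units; the real conversion lives in pkg/amt).
func parseAmount(s string) (int64, error) {
	f, e := strconv.ParseFloat(s, 64)
	if e != nil {
		return 0, e
	}
	if math.IsNaN(f) || math.IsInf(f, 0) {
		return 0, errors.New("invalid amount")
	}
	return int64(math.Round(f * 1e8)), nil
}

func main() {
	u, _ := parseAmount("1.5")
	fmt.Println(u) // 150000000
}
```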

85
cmd/gui/regenerate.go Normal file
@@ -0,0 +1,85 @@
package gui
import (
"image"
"path/filepath"
"strconv"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/atotto/clipboard"
"github.com/p9c/p9/pkg/gel/gio/op/paint"
"github.com/p9c/p9/pkg/qrcode"
)
func (wg *WalletGUI) GetNewReceivingAddress() {
D.Ln("GetNewReceivingAddress")
var addr btcaddr.Address
var e error
if addr, e = wg.WalletClient.GetNewAddress("default"); !E.Chk(e) {
D.Ln(
"getting new receiving address", addr.EncodeAddress(),
"previous:", wg.State.currentReceivingAddress.String.Load(),
)
// save to addressbook
var ae AddressEntry
ae.Address = addr.EncodeAddress()
var amount float64
if amount, e = strconv.ParseFloat(
wg.inputs["receiveAmount"].GetText(),
64,
); !E.Chk(e) {
if ae.Amount, e = amt.NewAmount(amount); E.Chk(e) {
}
}
msg := wg.inputs["receiveMessage"].GetText()
if len(msg) > 64 {
msg = msg[:64]
}
ae.Message = msg
ae.Created = time.Now()
if wg.State.IsReceivingAddress() {
wg.State.receiveAddresses = append(wg.State.receiveAddresses, ae)
} else {
wg.State.receiveAddresses = []AddressEntry{ae}
wg.State.isAddress.Store(true)
}
D.S(wg.State.receiveAddresses)
wg.State.SetReceivingAddress(addr)
filename := filepath.Join(wg.cx.Config.DataDir.V(), "state.json")
if e = wg.State.Save(filename, wg.cx.Config.WalletPass.Bytes(), false); E.Chk(e) {
}
wg.Invalidate()
}
}
func (wg *WalletGUI) GetNewReceivingQRCode(qrText string) {
wg.currentReceiveRegenerate.Store(false)
var qrc image.Image
D.Ln("generating QR code")
var e error
if qrc, e = qrcode.Encode(qrText, 0, qrcode.ECLevelL, 4); !E.Chk(e) {
iop := paint.NewImageOp(qrc)
wg.currentReceiveQRCode = &iop
wg.currentReceiveQR = wg.ButtonLayout(
wg.currentReceiveCopyClickable.SetClick(
func() {
D.Ln("clicked qr code copy clicker")
if e := clipboard.WriteAll(qrText); E.Chk(e) {
}
},
),
).
Background("white").
Embed(
wg.Inset(
0.125,
wg.Image().Src(*wg.currentReceiveQRCode).Scale(1).Fn,
).Fn,
).Fn
}
}

498
cmd/gui/send.go Normal file
@@ -0,0 +1,498 @@
package gui
import (
"fmt"
"strconv"
"strings"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/atotto/clipboard"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/chainhash"
)
type SendPage struct {
wg *WalletGUI
inputWidth, break1 float32
}
func (wg *WalletGUI) GetSendPage() (sp *SendPage) {
sp = &SendPage{
wg: wg,
inputWidth: 17,
break1: 48,
}
wg.inputs["sendAddress"].SetPasteFunc = sp.pasteFunction
wg.inputs["sendAmount"].SetPasteFunc = sp.pasteFunction
wg.inputs["sendMessage"].SetPasteFunc = sp.pasteFunction
return
}
func (sp *SendPage) Fn(gtx l.Context) l.Dimensions {
wg := sp.wg
return wg.Responsive(
wg.Size.Load(), gel.Widgets{
{
Widget: sp.SmallList,
},
{
Size: sp.break1,
Widget: sp.MediumList,
},
},
).Fn(gtx)
}
func (sp *SendPage) SmallList(gtx l.Context) l.Dimensions {
wg := sp.wg
smallWidgets := []l.Widget{
wg.Flex().Rigid(wg.balanceCard()).Fn,
sp.InputMessage(),
sp.AddressInput(),
sp.AmountInput(),
sp.MessageInput(),
wg.Flex().
Flexed(
1,
sp.SendButton(),
).
Rigid(
wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
).
Rigid(
sp.PasteButton(),
).
Rigid(
wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
).
Rigid(
sp.SaveButton(),
).Fn,
sp.AddressbookHeader(),
}
smallWidgets = append(smallWidgets, sp.GetAddressbookHistoryCards("DocBg")...)
le := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(
0.25,
smallWidgets[index],
).Fn(gtx)
}
return wg.lists["send"].
Vertical().
Length(len(smallWidgets)).
ListElement(le).Fn(gtx)
}
func (sp *SendPage) InputMessage() l.Widget {
return sp.wg.Body2("Enter or paste the details for a payment").Alignment(text.Start).Fn
}
func (sp *SendPage) MediumList(gtx l.Context) l.Dimensions {
wg := sp.wg
sendFormWidget := []l.Widget{
wg.balanceCard(),
sp.InputMessage(),
sp.AddressInput(),
sp.AmountInput(),
sp.MessageInput(),
wg.Flex().
Flexed(
1,
sp.SendButton(),
).
Rigid(
wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
).
Rigid(
sp.PasteButton(),
).
Rigid(
wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
).
Rigid(
sp.SaveButton(),
).Fn,
}
sendLE := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(0.25, sendFormWidget[index]).Fn(gtx)
}
var historyWidget []l.Widget
historyWidget = append(historyWidget, sp.GetAddressbookHistoryCards("DocBg")...)
historyLE := func(gtx l.Context, index int) l.Dimensions {
return wg.Inset(
0.25,
historyWidget[index],
).Fn(gtx)
}
return wg.Flex().AlignStart().
Rigid(
func(gtx l.Context) l.Dimensions {
gtx.Constraints.Max.X =
int(wg.TextSize.V * sp.inputWidth)
// gtx.Constraints.Min.X = int(wg.TextSize.True * sp.inputWidth)
return wg.VFlex().AlignStart().
Rigid(
wg.lists["sendMedium"].
Vertical().
Length(len(sendFormWidget)).
ListElement(sendLE).Fn,
).Fn(gtx)
},
).
// Rigid(wg.Inset(0.25, gel.EmptySpace(0, 0)).Fn).
Flexed(
1,
wg.VFlex().AlignStart().
Rigid(
sp.AddressbookHeader(),
).
Rigid(
wg.lists["sendAddresses"].
Vertical().
Length(len(historyWidget)).
ListElement(historyLE).Fn,
).Fn,
).Fn(gtx)
}
func (sp *SendPage) AddressInput() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
return wg.inputs["sendAddress"].Fn(gtx)
}
}
func (sp *SendPage) AmountInput() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
return wg.inputs["sendAmount"].Fn(gtx)
}
}
func (sp *SendPage) MessageInput() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
return wg.inputs["sendMessage"].Fn(gtx)
}
}
func (sp *SendPage) SendButton() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
if wg.inputs["sendAmount"].GetText() == "" || wg.inputs["sendMessage"].GetText() == "" ||
wg.inputs["sendAddress"].GetText() == "" {
gtx.Queue = nil
}
return wg.ButtonLayout(
wg.clickables["sendSend"].
SetClick(
func() {
D.Ln("clicked send button")
go func() {
if wg.WalletAndClientRunning() {
var amount float64
var am amt.Amount
var e error
if amount, e = strconv.ParseFloat(
wg.inputs["sendAmount"].GetText(),
64,
); !E.Chk(e) {
if am, e = amt.NewAmount(amount); E.Chk(e) {
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>", e)
// todo: indicate this to the user somehow
return
}
} else {
// todo: indicate this to the user somehow
return
}
var addr btcaddr.Address
if addr, e = btcaddr.Decode(
wg.inputs["sendAddress"].GetText(),
wg.cx.ActiveNet,
); E.Chk(e) {
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>", e)
D.Ln("invalid address")
// TODO: indicate this to the user somehow
return
}
if e = wg.WalletClient.WalletPassphrase(wg.cx.Config.WalletPass.V(), 5); E.Chk(e) {
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>", e)
return
}
var txid *chainhash.Hash
if txid, e = wg.WalletClient.SendToAddress(addr, am); E.Chk(e) {
// TODO: indicate send failure to user somehow
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>", e)
return
}
wg.RecentTransactions(10, "recent")
wg.RecentTransactions(-1, "history")
wg.Invalidate()
D.Ln("transaction successful", txid)
sp.saveForm(txid.String())
select {
case <-time.After(time.Second * 5):
case <-wg.quit:
}
}
}()
},
),
).
Background("Primary").
Embed(
wg.Inset(
0.5,
wg.H6("send").Color("Light").Fn,
).
Fn,
).
Fn(gtx)
}
}
func (sp *SendPage) saveForm(txid string) {
wg := sp.wg
D.Ln("processing form data to save")
amtS := wg.inputs["sendAmount"].GetText()
var e error
var amount float64
if amount, e = strconv.ParseFloat(amtS, 64); E.Chk(e) {
return
}
if amount == 0 {
return
}
var ua amt.Amount
if ua, e = amt.NewAmount(amount); E.Chk(e) {
return
}
msg := wg.inputs["sendMessage"].GetText()
if msg == "" {
return
}
addr := wg.inputs["sendAddress"].GetText()
var ad btcaddr.Address
if ad, e = btcaddr.Decode(addr, wg.cx.ActiveNet); E.Chk(e) {
return
}
wg.State.sendAddresses = append(
wg.State.sendAddresses, AddressEntry{
Address: ad.EncodeAddress(),
Label: msg,
Amount: ua,
Created: time.Now(),
TxID: txid,
},
)
// prevent accidental double clicks recording the same entry again
wg.inputs["sendAmount"].SetText("")
wg.inputs["sendMessage"].SetText("")
wg.inputs["sendAddress"].SetText("")
}
func (sp *SendPage) SaveButton() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
if wg.inputs["sendAmount"].GetText() == "" || wg.inputs["sendMessage"].GetText() == "" ||
wg.inputs["sendAddress"].GetText() == "" {
gtx.Queue = nil
}
return wg.ButtonLayout(
wg.clickables["sendSave"].
SetClick(
func() { sp.saveForm("") },
),
).
Background("Primary").
Embed(
wg.Inset(
0.5,
wg.H6("save").Color("Light").Fn,
).
Fn,
).
Fn(gtx)
}
}
func (sp *SendPage) PasteButton() l.Widget {
return func(gtx l.Context) l.Dimensions {
wg := sp.wg
return wg.ButtonLayout(
wg.clickables["sendFromRequest"].
SetClick(func() { sp.pasteFunction() }),
).
Background("Primary").
Embed(
wg.Inset(
0.5,
wg.H6("paste").Color("Light").Fn,
).
Fn,
).
Fn(gtx)
}
}
func (sp *SendPage) pasteFunction() (b bool) {
wg := sp.wg
D.Ln("clicked paste button")
var urn string
var e error
if urn, e = clipboard.ReadAll(); E.Chk(e) {
return
}
if !strings.HasPrefix(urn, "parallelcoin:") {
if e = clipboard.WriteAll(urn); E.Chk(e) {
}
return
}
split1 := strings.Split(urn, "parallelcoin:")
split2 := strings.Split(split1[1], "?")
addr := split2[0]
var ua btcaddr.Address
if ua, e = btcaddr.Decode(addr, wg.cx.ActiveNet); E.Chk(e) {
return
}
_ = ua
b = true
wg.inputs["sendAddress"].SetText(addr)
if len(split2) <= 1 {
return
}
split3 := strings.Split(split2[1], "&")
for i := range split3 {
var split4 []string
split4 = strings.Split(split3[i], "=")
D.Ln(split4)
if len(split4) > 1 {
switch split4[0] {
case "amount":
wg.inputs["sendAmount"].SetText(split4[1])
// D.Ln("############ amount", split4[1])
case "message", "label":
msg := split4[1]
if len(msg) > 64 {
msg = msg[:64]
}
wg.inputs["sendMessage"].SetText(msg)
// D.Ln("############ message", split4[1])
}
}
}
return
}
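pasteFunction above parses the `parallelcoin:ADDRESS?amount=N&message=M` URN by splitting on ":", "?" and "&" by hand. The same parse can be sketched with the standard net/url package, which handles the query portion for free — `parseURN` is an illustrative stand-in, not code from this repository:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// parseURN extracts address, amount and message from a
// "parallelcoin:ADDRESS?amount=N&message=M" string, the same shape
// pasteFunction handles by manual splitting.
func parseURN(urn string) (addr, amount, message string, ok bool) {
	if !strings.HasPrefix(urn, "parallelcoin:") {
		return "", "", "", false
	}
	u, e := url.Parse(urn)
	if e != nil {
		return "", "", "", false
	}
	// for scheme:opaque?query URLs the address lands in Opaque
	addr = u.Opaque
	q := u.Query()
	amount = q.Get("amount")
	message = q.Get("message")
	if message == "" {
		message = q.Get("label")
	}
	if len(message) > 64 {
		message = message[:64] // same field-length limit as the GUI inputs
	}
	return addr, amount, message, true
}

func main() {
	a, amount, msg, ok := parseURN("parallelcoin:XyZ?amount=1.50000000&message=coffee")
	fmt.Println(ok, a, amount, msg) // true XyZ 1.50000000 coffee
}
```

Note the hand-rolled version does not URL-decode the message; net/url does, which matters if messages ever contain spaces or "&".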
func (sp *SendPage) AddressbookHeader() l.Widget {
wg := sp.wg
return wg.Flex().AlignStart().
Rigid(
wg.Inset(
0.25,
wg.H5("Send Address Book").Fn,
).Fn,
).Fn
}
func (sp *SendPage) GetAddressbookHistoryCards(bg string) (widgets []l.Widget) {
wg := sp.wg
avail := len(wg.sendAddressbookClickables)
req := len(wg.State.sendAddresses)
if req > avail {
for i := 0; i < req-avail; i++ {
wg.sendAddressbookClickables = append(wg.sendAddressbookClickables, wg.WidgetPool.GetClickable())
}
}
for x := range wg.State.sendAddresses {
j := x
i := len(wg.State.sendAddresses) - 1 - x
widgets = append(
widgets, func(gtx l.Context) l.Dimensions {
return wg.ButtonLayout(
wg.sendAddressbookClickables[i].SetClick(
func() {
sendText := fmt.Sprintf(
"parallelcoin:%s?amount=%8.8f&message=%s",
wg.State.sendAddresses[i].Address,
wg.State.sendAddresses[i].Amount.ToDUO(),
wg.State.sendAddresses[i].Label,
)
D.Ln("clicked send address list item", j)
if e := clipboard.WriteAll(sendText); E.Chk(e) {
}
},
),
).
Background(bg).
Embed(
wg.Inset(
0.25,
wg.VFlex().AlignStart().
Rigid(
wg.Flex().AlignBaseline().
Rigid(
wg.Caption(wg.State.sendAddresses[i].Address).
Font("go regular").Fn,
).
Flexed(
1,
wg.Body1(wg.State.sendAddresses[i].Amount.String()).
Alignment(text.End).Fn,
).
Fn,
).
Rigid(
wg.Inset(
0.25,
wg.Body1(wg.State.sendAddresses[i].Label).MaxLines(1).Fn,
).Fn,
).
Rigid(
gel.If(
wg.State.sendAddresses[i].TxID != "",
func(ctx l.Context) l.Dimensions {
for j := range wg.txHistoryList {
if wg.txHistoryList[j].TxID == wg.State.sendAddresses[i].TxID {
return wg.Flex().Flexed(
1,
wg.VFlex().
Rigid(
wg.Flex().Flexed(
1,
wg.Caption(wg.State.sendAddresses[i].TxID).MaxLines(1).Fn,
).Fn,
).
Rigid(
wg.Body1(
fmt.Sprint(
"Confirmations: ",
wg.txHistoryList[j].Confirmations,
),
).Fn,
).Fn,
).Fn(ctx)
}
}
return l.Dimensions{}
},
func(ctx l.Context) l.Dimensions { return l.Dimensions{} },
),
).
Fn,
).
Fn,
).Fn(gtx)
},
)
}
return
}

319
cmd/gui/state.go Normal file
@@ -0,0 +1,319 @@
package gui
import (
"crypto/cipher"
"encoding/json"
"io/ioutil"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/chaincfg"
uberatomic "go.uber.org/atomic"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/btcjson"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/gcm"
"github.com/p9c/p9/pkg/transport"
"github.com/p9c/p9/pkg/util/atom"
)
const ZeroAddress = "1111111111111111111114oLvT2"
// CategoryFilter marks which transactions to omit from the filtered transaction list
type CategoryFilter struct {
Send bool
Generate bool
Immature bool
Receive bool
Unknown bool
}
func (c *CategoryFilter) Filter(s string) (include bool) {
include = true
if c.Send && s == "send" {
include = false
}
if c.Generate && s == "generate" {
include = false
}
if c.Immature && s == "immature" {
include = false
}
if c.Receive && s == "receive" {
include = false
}
if c.Unknown && s == "unknown" {
include = false
}
return
}
type AddressEntry struct {
Address string `json:"address"`
Message string `json:"message,omitempty"`
Label string `json:"label,omitempty"`
Amount amt.Amount `json:"amount"`
Created time.Time `json:"created"`
Modified time.Time `json:"modified"`
TxID string `json:"txid,omitempty"`
}
type State struct {
lastUpdated *atom.Time
bestBlockHeight *atom.Int32
bestBlockHash *atom.Hash
balance *atom.Float64
balanceUnconfirmed *atom.Float64
goroutines []l.Widget
allTxs *atom.ListTransactionsResult
filteredTxs *atom.ListTransactionsResult
filter CategoryFilter
filterChanged *atom.Bool
currentReceivingAddress *atom.Address
isAddress *atom.Bool
activePage *uberatomic.String
sendAddresses []AddressEntry
receiveAddresses []AddressEntry
}
func GetNewState(params *chaincfg.Params, activePage *uberatomic.String) *State {
fc := &atom.Bool{
Bool: uberatomic.NewBool(false),
}
return &State{
lastUpdated: atom.NewTime(time.Now()),
bestBlockHeight: &atom.Int32{Int32: uberatomic.NewInt32(0)},
bestBlockHash: atom.NewHash(chainhash.Hash{}),
balance: &atom.Float64{Float64: uberatomic.NewFloat64(0)},
balanceUnconfirmed: &atom.Float64{
Float64: uberatomic.NewFloat64(0),
},
goroutines: nil,
allTxs: atom.NewListTransactionsResult(
[]btcjson.ListTransactionsResult{},
),
filteredTxs: atom.NewListTransactionsResult(
[]btcjson.ListTransactionsResult{},
),
filter: CategoryFilter{},
filterChanged: fc,
currentReceivingAddress: atom.NewAddress(
&btcaddr.PubKeyHash{},
params,
),
isAddress: &atom.Bool{Bool: uberatomic.NewBool(false)},
activePage: activePage,
}
}
func (s *State) BumpLastUpdated() {
s.lastUpdated.Store(time.Now())
}
func (s *State) SetReceivingAddress(addr btcaddr.Address) {
s.currentReceivingAddress.Store(addr)
}
func (s *State) IsReceivingAddress() bool {
addr := s.currentReceivingAddress.String.Load()
if addr == ZeroAddress || addr == "" {
s.isAddress.Store(false)
} else {
s.isAddress.Store(true)
}
return s.isAddress.Load()
}
// Save the state to the specified file
func (s *State) Save(filename string, pass []byte, debug bool) (e error) {
D.Ln("saving state...")
marshalled := s.Marshal()
var j []byte
if j, e = json.MarshalIndent(marshalled, "", " "); E.Chk(e) {
return
}
// D.Ln(string(j))
var ciph cipher.AEAD
if ciph, e = gcm.GetCipher(pass); E.Chk(e) {
return
}
var nonce []byte
if nonce, e = transport.GetNonce(ciph); E.Chk(e) {
return
}
crypted := append(nonce, ciph.Seal(nil, nonce, j, nil)...)
// sanity check: since it was just created it should not fail to decrypt
if _, e = ciph.Open(nil, nonce, crypted[len(nonce):], nil); E.Chk(e) {
panic(e)
}
if e = ioutil.WriteFile(filename, crypted, 0600); E.Chk(e) {
}
if debug {
if e = ioutil.WriteFile(filename+".clear", j, 0600); E.Chk(e) {
}
}
return
}
// Load in the configuration from the specified file and decrypt using the given password
func (s *State) Load(filename string, pass []byte) (e error) {
D.Ln("loading state...")
var data []byte
var ciph cipher.AEAD
if data, e = ioutil.ReadFile(filename); E.Chk(e) {
return
}
D.Ln("cipher:", string(pass))
if ciph, e = gcm.GetCipher(pass); E.Chk(e) {
return
}
ns := ciph.NonceSize()
D.Ln("nonce size:", ns)
nonce := data[:ns]
data = data[ns:]
var b []byte
if b, e = ciph.Open(nil, nonce, data, nil); E.Chk(e) {
// interrupt.Request()
return
}
// yay, right password, now unmarshal
ss := &Marshalled{}
if e = json.Unmarshal(b, ss); E.Chk(e) {
return
}
// D.Ln(string(b))
ss.Unmarshal(s)
return
}
type Marshalled struct {
LastUpdated time.Time
BestBlockHeight int32
BestBlockHash chainhash.Hash
Balance float64
BalanceUnconfirmed float64
AllTxs []btcjson.ListTransactionsResult
Filter CategoryFilter
ReceivingAddress string
ActivePage string
ReceiveAddressBook []AddressEntry
SendAddressBook []AddressEntry
}
func (s *State) Marshal() (out *Marshalled) {
out = &Marshalled{
LastUpdated: s.lastUpdated.Load(),
BestBlockHeight: s.bestBlockHeight.Load(),
BestBlockHash: s.bestBlockHash.Load(),
Balance: s.balance.Load(),
BalanceUnconfirmed: s.balanceUnconfirmed.Load(),
AllTxs: s.allTxs.Load(),
Filter: s.filter,
ReceivingAddress: s.currentReceivingAddress.Load().EncodeAddress(),
ActivePage: s.activePage.Load(),
ReceiveAddressBook: s.receiveAddresses,
SendAddressBook: s.sendAddresses,
}
return
}
func (m *Marshalled) Unmarshal(s *State) {
s.lastUpdated.Store(m.LastUpdated)
s.bestBlockHeight.Store(m.BestBlockHeight)
s.bestBlockHash.Store(m.BestBlockHash)
s.balance.Store(m.Balance)
s.balanceUnconfirmed.Store(m.BalanceUnconfirmed)
if len(s.allTxs.Load()) < len(m.AllTxs) {
s.allTxs.Store(m.AllTxs)
}
s.receiveAddresses = m.ReceiveAddressBook
s.sendAddresses = m.SendAddressBook
s.filter = m.Filter
if m.ReceivingAddress != ZeroAddress {
var e error
var ra btcaddr.Address
if ra, e = btcaddr.Decode(m.ReceivingAddress, s.currentReceivingAddress.ForNet); !E.Chk(e) {
s.currentReceivingAddress.Store(ra)
}
}
s.SetActivePage(m.ActivePage)
return
}
func (s *State) Goroutines() []l.Widget {
return s.goroutines
}
func (s *State) SetGoroutines(gr []l.Widget) {
s.goroutines = gr
}
func (s *State) SetAllTxs(atxs []btcjson.ListTransactionsResult) {
s.allTxs.Store(atxs)
// generate filtered state
filteredTxs := make([]btcjson.ListTransactionsResult, 0, len(s.allTxs.Load()))
for i := range atxs {
if s.filter.Filter(atxs[i].Category) {
filteredTxs = append(filteredTxs, atxs[i])
}
}
s.filteredTxs.Store(filteredTxs)
}
func (s *State) LastUpdated() time.Time {
return s.lastUpdated.Load()
}
func (s *State) BestBlockHeight() int32 {
return s.bestBlockHeight.Load()
}
func (s *State) BestBlockHash() *chainhash.Hash {
o := s.bestBlockHash.Load()
return &o
}
func (s *State) Balance() float64 {
return s.balance.Load()
}
func (s *State) BalanceUnconfirmed() float64 {
return s.balanceUnconfirmed.Load()
}
func (s *State) ActivePage() string {
return s.activePage.Load()
}
func (s *State) SetActivePage(page string) {
s.activePage.Store(page)
}
func (s *State) SetBestBlockHeight(height int32) {
s.BumpLastUpdated()
s.bestBlockHeight.Store(height)
}
func (s *State) SetBestBlockHash(h *chainhash.Hash) {
s.BumpLastUpdated()
s.bestBlockHash.Store(*h)
}
func (s *State) SetBalance(total float64) {
s.BumpLastUpdated()
s.balance.Store(total)
}
func (s *State) SetBalanceUnconfirmed(unconfirmed float64) {
s.BumpLastUpdated()
s.balanceUnconfirmed.Store(unconfirmed)
}

cmd/gui/walletunlock.go Normal file

@@ -0,0 +1,529 @@
package gui
import (
"encoding/hex"
"encoding/json"
"fmt"
"io/ioutil"
"path/filepath"
"time"
"golang.org/x/exp/shiny/materialdesign/icons"
"lukechampine.com/blake3"
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/p9icons"
)
func (wg *WalletGUI) unlockWallet(pass string) {
D.Ln("entered password", pass)
// unlock wallet
// wg.cx.Config.Lock()
wg.cx.Config.WalletPass.Set(pass)
wg.cx.Config.WalletOff.F()
// wg.cx.Config.Unlock()
// load config into a fresh variable
// cfg := podcfgs.GetDefaultConfig()
var cfgFile []byte
var e error
if cfgFile, e = ioutil.ReadFile(wg.cx.Config.ConfigFile.V()); E.Chk(e) {
// this should not happen
// TODO: panic-type conditions - for gel should have a notification maybe?
panic("config file does not exist")
}
cfg := wg.cx.Config
D.Ln("loaded config")
if e = json.Unmarshal(cfgFile, &cfg); !E.Chk(e) {
D.Ln("unmarshaled config")
bhb := blake3.Sum256([]byte(pass))
bh := hex.EncodeToString(bhb[:])
I.Ln(pass, bh, cfg.WalletPass.V())
if cfg.WalletPass.V() == bh {
D.Ln("loading previously saved state")
filename := filepath.Join(wg.cx.Config.DataDir.V(), "state.json")
// if log.FileExists(filename) {
I.Ln("#### loading state data...")
if e = wg.State.Load(filename, wg.cx.Config.WalletPass.Bytes()); !E.Chk(e) {
D.Ln("#### loaded state data")
}
// it is as though it is loaded if it didn't exist
wg.stateLoaded.Store(true)
// the entered password matches the stored hash
wg.cx.Config.NodeOff.F()
wg.cx.Config.WalletOff.F()
wg.cx.Config.WalletPass.Set(pass)
if e = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V()); E.Chk(e) {
}
wg.cx.Config.WalletPass.Set(pass)
wg.WalletWatcher = wg.Watcher()
// }
//
// qrText := fmt.Sprintf("parallelcoin:%s?amount=%s&message=%s",
// wg.State.currentReceivingAddress.Load().EncodeAddress(),
// wg.inputs["receiveAmount"].GetText(),
// wg.inputs["receiveMessage"].GetText(),
// )
// var qrc image.Image
// if qrc, e = qrcode.Encode(qrText, 0, qrcode.ECLevelL, 4); !E.Chk(e) {
// iop := paint.NewImageOp(qrc)
// wg.currentReceiveQRCode = &iop
// wg.currentReceiveQR = wg.ButtonLayout(wg.currentReceiveCopyClickable.SetClick(func() {
// D.Ln("clicked qr code copy clicker")
// if e := clipboard.WriteAll(qrText); E.Chk(e) {
// }
// })).
// // CornerRadius(0.5).
// // Corners(gel.NW | gel.SW | gel.NE).
// Background("white").
// Embed(
// wg.Inset(0.125,
// wg.Image().Src(*wg.currentReceiveQRCode).Scale(1).Fn,
// ).Fn,
// ).Fn
// // *wg.currentReceiveQRCode = iop
// }
}
} else {
D.Ln("failed to unlock the wallet")
}
}
func (wg *WalletGUI) getWalletUnlockAppWidget() (a *gel.App) {
a = wg.App(wg.Window.Width, wg.State.activePage, Break1).
SetMainDirection(l.Center + 1).
SetLogo(&p9icons.ParallelCoin).
SetAppTitleText("Parallelcoin Wallet")
wg.unlockPage = a
password := wg.cx.Config.WalletPass
exitButton := wg.WidgetPool.GetClickable()
unlockButton := wg.WidgetPool.GetClickable()
wg.unlockPassword = wg.Password(
"enter password", password, "DocText",
"DocBg", "PanelBg", func(pass string) {
I.Ln("wallet unlock initiated", pass)
wg.unlockWallet(pass)
},
)
// wg.unlockPage.SetThemeHook(
// func() {
// D.Ln("theme hook")
// // D.Ln(wg.bools)
// wg.cx.Config.DarkTheme.Set(*wg.Dark)
// b := wg.configs["config"]["DarkTheme"].Slot.(*bool)
// *b = *wg.Dark
// if wgb, ok := wg.config.Bools["DarkTheme"]; ok {
// wgb.Value(*wg.Dark)
// }
// var e error
// if e = wg.cx.Config.WriteToFile(wg.cx.Config.ConfigFile.V()); E.Chk(e) {
// }
// },
// )
a.Pages(
map[string]l.Widget{
"home": wg.Page(
"home", gel.Widgets{
gel.WidgetSize{
Widget:
func(gtx l.Context) l.Dimensions {
var dims l.Dimensions
return wg.Flex().
AlignMiddle().
Flexed(
1,
wg.VFlex().
Flexed(0.5, gel.EmptyMaxHeight()).
Rigid(
wg.Flex().
SpaceEvenly().
AlignMiddle().
Rigid(
wg.Fill(
"DocBg", l.Center, wg.TextSize.V, 0,
wg.Inset(
0.5,
wg.Flex().
AlignMiddle().
Rigid(
wg.VFlex().
AlignMiddle().
Rigid(
func(gtx l.Context) l.Dimensions {
dims = wg.Flex().
AlignBaseline().
Rigid(
wg.Fill(
"Fatal",
l.Center,
wg.TextSize.V/2,
0,
wg.Inset(
0.5,
wg.Icon().
Scale(gel.Scales["H3"]).
Color("DocBg").
Src(&icons.ActionLock).Fn,
).Fn,
).Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(0, 0),
).Fn,
).
Rigid(
wg.H2("locked").Color("DocText").Fn,
).
Fn(gtx)
return dims
},
).
Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).
Rigid(
func(gtx l.Context) l.Dimensions {
gtx.Constraints.Max.X = dims.Size.X
return wg.unlockPassword.Fn(gtx)
},
).
Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).
Rigid(
wg.Body1(
fmt.Sprintf(
"%v idle timeout",
time.Duration(wg.incdecs["idleTimeout"].GetCurrent())*time.Second,
),
).
Color("DocText").
Font("bariol bold").
Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Body1("Idle timeout in seconds:").Color(
"DocText",
).Fn,
).
Rigid(
wg.incdecs["idleTimeout"].
Color("DocText").
Background("DocBg").
Scale(gel.Scales["Caption"]).
Fn,
).
Fn,
).
Rigid(
wg.Flex().
Rigid(
wg.Inset(
0.25,
wg.ButtonLayout(
exitButton.SetClick(
func() {
interrupt.Request()
},
),
).
CornerRadius(0.5).
Corners(0).
Background("PanelBg").
Embed(
// wg.Fill("DocText",
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(
gel.Scales["H4"],
).
Color("DocText").
Src(
&icons.MapsDirectionsRun,
).Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Rigid(
wg.H6("exit").Color("DocText").Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Fn,
).Fn,
// l.Center,
// wg.TextSize.True/2).Fn,
).Fn,
).Fn,
).
Rigid(
wg.Inset(
0.25,
wg.ButtonLayout(
unlockButton.SetClick(
func() {
// pass := wg.unlockPassword.Editor().Text()
pass := wg.unlockPassword.GetPassword()
D.Ln(
">>>>>>>>>>> unlock password",
pass,
)
wg.unlockWallet(pass)
},
),
).Background("Primary").
CornerRadius(0.5).
Corners(0).
Embed(
wg.Inset(
0.25,
wg.Flex().AlignMiddle().
Rigid(
wg.Icon().
Scale(gel.Scales["H4"]).
Color("Light").
Src(&icons.ActionLockOpen).Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Rigid(
wg.H6("unlock").Color("Light").Fn,
).
Rigid(
wg.Inset(
0.5,
gel.EmptySpace(
0,
0,
),
).Fn,
).
Fn,
).Fn,
).Fn,
).Fn,
).
Fn,
).
Fn,
).
Fn,
).Fn,
).Fn,
).
Fn,
).Flexed(0.5, gel.EmptyMaxHeight()).Fn,
).
Fn(gtx)
},
},
},
),
"settings": wg.Page(
"settings", gel.Widgets{
gel.WidgetSize{
Widget: func(gtx l.Context) l.Dimensions {
return wg.configs.Widget(wg.config)(gtx)
},
},
},
),
"console": wg.Page(
"console", gel.Widgets{
gel.WidgetSize{Widget: wg.console.Fn},
},
),
"help": wg.Page(
"help", gel.Widgets{
gel.WidgetSize{Widget: wg.HelpPage()},
},
),
"log": wg.Page(
"log", gel.Widgets{
gel.WidgetSize{Widget: a.Placeholder("log")},
},
),
"quit": wg.Page(
"quit", gel.Widgets{
gel.WidgetSize{
Widget: func(gtx l.Context) l.Dimensions {
return wg.VFlex().
SpaceEvenly().
AlignMiddle().
Rigid(
wg.H4("are you sure?").Color(wg.unlockPage.BodyColorGet()).Alignment(text.Middle).Fn,
).
Rigid(
wg.Flex().
// SpaceEvenly().
Flexed(0.5, gel.EmptyMaxWidth()).
Rigid(
wg.Button(
wg.clickables["quit"].SetClick(
func() {
wg.gracefulShutdown()
// close(wg.quit)
},
),
).Color("Light").TextScale(5).Text(
"yes!!!",
).Fn,
).
Flexed(0.5, gel.EmptyMaxWidth()).
Fn,
).
Fn(gtx)
},
},
},
),
// "goroutines": wg.Page(
// "log", p9.Widgets{
// // p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
//
// p9.WidgetSize{
// Widget: func(gtx l.Context) l.Dimensions {
// le := func(gtx l.Context, index int) l.Dimensions {
// return wg.State.goroutines[index](gtx)
// }
// return func(gtx l.Context) l.Dimensions {
// return wg.ButtonInset(
// 0.25,
// wg.Fill(
// "DocBg",
// wg.lists["recent"].
// Vertical().
// // Background("DocBg").Color("DocText").Active("Primary").
// Length(len(wg.State.goroutines)).
// ListElement(le).
// Fn,
// ).Fn,
// ).
// Fn(gtx)
// }(gtx)
// // wg.NodeRunCommandChan <- "stop"
// // consume.Kill(wg.Worker)
// // consume.Kill(wg.cx.StateCfg.Miner)
// // close(wg.cx.NodeKill)
// // close(wg.cx.KillAll)
// // time.Sleep(time.Second*3)
// // interrupt.Request()
// // os.Exit(0)
// // return l.Dimensions{}
// },
// },
// },
// ),
"mining": wg.Page(
"mining", gel.Widgets{
gel.WidgetSize{Widget: a.Placeholder("mining")},
},
),
"explorer": wg.Page(
"explorer", gel.Widgets{
gel.WidgetSize{Widget: a.Placeholder("explorer")},
},
),
},
)
// a.SideBar([]l.Widget{
// wg.SideBarButton("overview", "overview", 0),
// wg.SideBarButton("send", "send", 1),
// wg.SideBarButton("receive", "receive", 2),
// wg.SideBarButton("history", "history", 3),
// wg.SideBarButton("explorer", "explorer", 6),
// wg.SideBarButton("mining", "mining", 7),
// wg.SideBarButton("console", "console", 9),
// wg.SideBarButton("settings", "settings", 5),
// wg.SideBarButton("log", "log", 10),
// wg.SideBarButton("help", "help", 8),
// wg.SideBarButton("quit", "quit", 11),
// })
a.ButtonBar(
[]l.Widget{
// wg.PageTopBarButton(
// "goroutines", 0, &icons.ActionBugReport, func(name string) {
// wg.unlockPage.ActivePage(name)
// }, wg.unlockPage, "",
// ),
wg.PageTopBarButton(
"help", 1, &icons.ActionHelp, func(name string) {
wg.unlockPage.ActivePage(name)
}, wg.unlockPage, "",
),
wg.PageTopBarButton(
"home", 4, &icons.ActionLock, func(name string) {
wg.unlockPage.ActivePage(name)
}, wg.unlockPage, "Danger",
),
// wg.Flex().Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).Fn,
// wg.PageTopBarButton(
// "quit", 3, &icons.ActionExitToApp, func(name string) {
// wg.unlockPage.ActivePage(name)
// }, wg.unlockPage, "",
// ),
},
)
a.StatusBar(
[]l.Widget{
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
wg.RunStatusPanel,
},
[]l.Widget{
wg.StatusBarButton(
"console", 3, &p9icons.Terminal, func(name string) {
wg.MainApp.ActivePage(name)
}, a,
),
wg.StatusBarButton(
"log", 4, &icons.ActionList, func(name string) {
D.Ln("click on button", name)
wg.unlockPage.ActivePage(name)
}, wg.unlockPage,
),
wg.StatusBarButton(
"settings", 5, &icons.ActionSettings, func(name string) {
wg.unlockPage.ActivePage(name)
}, wg.unlockPage,
),
// wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn,
},
)
// a.PushOverlay(wg.toasts.DrawToasts())
// a.PushOverlay(wg.dialog.DrawDialog())
return
}

cmd/gui/watcher.go Normal file

@@ -0,0 +1,158 @@
package gui
import (
"time"
"github.com/p9c/p9/pkg/btcjson"
"github.com/p9c/p9/pkg/qu"
)
// Watcher keeps the chain, wallet, and RPC clients connected
func (wg *WalletGUI) Watcher() qu.C {
var e error
I.Ln("starting up watcher")
quit := qu.T()
// start things up first
if !wg.node.Running() {
D.Ln("watcher starting node")
wg.node.Start()
}
if wg.ChainClient == nil {
D.Ln("chain client is not initialized")
var e error
if e = wg.chainClient(); E.Chk(e) {
}
}
if !wg.wallet.Running() {
D.Ln("watcher starting wallet")
wg.wallet.Start()
D.Ln("now we can open the wallet")
if e = wg.writeWalletCookie(); E.Chk(e) {
}
}
if wg.WalletClient == nil || wg.WalletClient.Disconnected() {
allOut:
for {
if e = wg.walletClient(); !E.Chk(e) {
out:
for {
// keep trying until shutdown or the wallet client connects
I.Ln("attempting to get blockchain info from wallet")
var bci *btcjson.GetBlockChainInfoResult
if bci, e = wg.WalletClient.GetBlockChainInfo(); E.Chk(e) {
select {
case <-time.After(time.Second):
continue
case <-wg.quit:
return nil
}
}
D.S(bci)
break out
}
}
wg.unlockPassword.Wipe()
select {
case <-time.After(time.Second):
break allOut
case <-wg.quit:
return nil
}
}
}
go func() {
watchTick := time.NewTicker(time.Second)
var e error
totalOut:
for {
disconnected:
for {
D.Ln("top of watcher loop")
select {
case <-watchTick.C:
if e = wg.Advertise(); E.Chk(e) {
}
if !wg.node.Running() {
D.Ln("watcher starting node")
wg.node.Start()
}
if wg.ChainClient.Disconnected() {
if e = wg.chainClient(); E.Chk(e) {
continue
}
}
if !wg.wallet.Running() {
D.Ln("watcher starting wallet")
wg.wallet.Start()
}
if wg.WalletClient == nil {
D.Ln("wallet client is not initialized")
if e = wg.walletClient(); E.Chk(e) {
continue
// } else {
// break disconnected
}
}
if wg.WalletClient.Disconnected() {
if e = wg.WalletClient.Connect(1); D.Chk(e) {
continue
// } else {
// break disconnected
}
} else {
D.Ln(
"chain, chainclient, wallet and client are now connected",
wg.node.Running(),
!wg.ChainClient.Disconnected(),
wg.wallet.Running(),
!wg.WalletClient.Disconnected(),
)
wg.updateChainBlock()
wg.processWalletBlockNotification()
break disconnected
}
case <-quit.Wait():
break totalOut
case <-wg.quit.Wait():
break totalOut
}
}
if wg.cx.Config.Controller.True() {
if wg.ChainClient != nil {
if e = wg.ChainClient.SetGenerate(
wg.cx.Config.Controller.True(),
wg.cx.Config.GenThreads.V(),
); !E.Chk(e) {
}
}
}
connected:
for {
select {
case <-watchTick.C:
if !wg.wallet.Running() {
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>> wallet not running, breaking out")
break connected
}
if wg.WalletClient == nil || wg.WalletClient.Disconnected() {
D.Ln(">>>>>>>>>>>>>>>>>>>>>>>>>>>>> wallet client disconnected, breaking out")
break connected
}
case <-quit.Wait():
break totalOut
case <-wg.quit.Wait():
break totalOut
}
}
}
D.Ln("shutting down watcher")
if wg.WalletClient != nil {
wg.WalletClient.Disconnect()
wg.WalletClient.Shutdown()
}
wg.wallet.Stop()
}()
return quit
}

cmd/kopach/client/log.go Normal file

@@ -0,0 +1,43 @@
package client
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// T.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}


@@ -0,0 +1,83 @@
package client
import (
"errors"
"io"
"net/rpc"
"github.com/p9c/p9/pkg/chainrpc/templates"
)
type Client struct {
*rpc.Client
}
// New creates a new client for a kopach worker. Note that any kind of
// io.ReadWriteCloser can be used here, not only the StdConn
func New(conn io.ReadWriteCloser) *Client {
return &Client{rpc.NewClient(conn)}
}
// NewJob delivers a new job to the worker, which starts a miner. Note that
// since this uses net/rpc, by default the payload is gob encoded
func (c *Client) NewJob(templates *templates.Message) (e error) {
// T.Ln("sending new templates")
// D.S(templates)
if templates == nil {
e = errors.New("templates is nil")
return
}
var reply bool
if e = c.Call("Worker.NewJob", templates, &reply); E.Chk(e) {
return
}
if !reply {
e = errors.New("new templates command not acknowledged")
}
D.Ln("new job delivered to workers")
return
}
// Pause tells the worker to stop working; this is used when the controlling
// node is not current
func (c *Client) Pause() (e error) {
// D.Ln("sending pause")
var reply bool
e = c.Call("Worker.Pause", 1, &reply)
if e != nil {
return
}
if !reply {
e = errors.New("pause command not acknowledged")
}
return
}
// Stop the workers
func (c *Client) Stop() (e error) {
D.Ln("stop working (exit)")
var reply bool
e = c.Call("Worker.Stop", 1, &reply)
if e != nil {
return
}
if !reply {
e = errors.New("stop command not acknowledged")
}
return
}
// SendPass sends the multicast PSK to the workers so they can dispatch their
// solutions
func (c *Client) SendPass(pass []byte) (e error) {
D.Ln("sending dispatch password")
var reply bool
e = c.Call("Worker.SendPass", pass, &reply)
if e != nil {
return
}
if !reply {
e = errors.New("send pass command not acknowledged")
}
return
}

cmd/kopach/kopach.go Normal file

@@ -0,0 +1,414 @@
package kopach
import (
"context"
"crypto/rand"
"fmt"
"net"
"os"
"runtime"
"time"
"github.com/niubaoshu/gotiny"
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/pkg/chainrpc/p2padvt"
"github.com/p9c/p9/pkg/chainrpc/templates"
"github.com/p9c/p9/pkg/constant"
"github.com/p9c/p9/pkg/pipe"
"github.com/p9c/p9/pod/state"
"github.com/p9c/p9/pkg/qu"
"github.com/VividCortex/ewma"
"go.uber.org/atomic"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/cmd/kopach/client"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/chainrpc/hashrate"
"github.com/p9c/p9/pkg/chainrpc/job"
"github.com/p9c/p9/pkg/chainrpc/pause"
rav "github.com/p9c/p9/pkg/ring"
"github.com/p9c/p9/pkg/transport"
)
var maxThreads = float32(runtime.NumCPU())
type HashCount struct {
uint64
Time time.Time
}
type SolutionData struct {
time time.Time
height int
algo string
hash string
indexHash string
version int32
prevBlock string
merkleRoot string
timestamp time.Time
bits uint32
nonce uint32
}
type Worker struct {
id string
cx *state.State
height int32
active atomic.Bool
conn *transport.Channel
ctx context.Context
quit qu.C
sendAddresses []*net.UDPAddr
clients []*client.Client
workers []*pipe.Worker
FirstSender atomic.Uint64
lastSent atomic.Int64
Status atomic.String
HashTick chan HashCount
LastHash *chainhash.Hash
StartChan, StopChan qu.C
// SetThreads chan int
PassChan chan string
solutions []SolutionData
solutionCount int
Update qu.C
hashCount atomic.Uint64
hashSampleBuf *rav.BufferUint64
hashrate float64
lastNonce uint64
}
func (w *Worker) Start() {
D.Ln("starting up kopach workers")
w.workers = []*pipe.Worker{}
w.clients = []*client.Client{}
for i := 0; i < w.cx.Config.GenThreads.V(); i++ {
D.Ln("starting worker", i)
cmd, _ := pipe.Spawn(w.quit, os.Args[0], "worker", w.id, w.cx.ActiveNet.Name, w.cx.Config.LogLevel.V())
w.workers = append(w.workers, cmd)
w.clients = append(w.clients, client.New(cmd.StdConn))
}
for i := range w.clients {
T.Ln("sending pass to worker", i)
if e := w.clients[i].SendPass(w.cx.Config.MulticastPass.Bytes()); E.Chk(e) {
}
}
D.Ln("setting workers to active")
w.active.Store(true)
}
func (w *Worker) Stop() {
var e error
for i := range w.clients {
if e = w.clients[i].Pause(); E.Chk(e) {
}
if e = w.clients[i].Stop(); E.Chk(e) {
}
if e = w.clients[i].Close(); E.Chk(e) {
}
}
for i := range w.workers {
// if e = w.workers[i].Interrupt(); !E.Chk(e) {
// }
if e = w.workers[i].Kill(); !E.Chk(e) {
}
D.Ln("stopped worker", i)
}
w.active.Store(false)
w.quit.Q()
}
// Run the miner
func Run(cx *state.State) (e error) {
D.Ln("miner starting")
randomBytes := make([]byte, 4)
if _, e = rand.Read(randomBytes); E.Chk(e) {
}
w := &Worker{
id: fmt.Sprintf("%x", randomBytes),
cx: cx,
quit: cx.KillAll,
sendAddresses: []*net.UDPAddr{},
StartChan: qu.T(),
StopChan: qu.T(),
// SetThreads: make(chan int),
solutions: make([]SolutionData, 0, 2048),
Update: qu.T(),
hashSampleBuf: rav.NewBufferUint64(1000),
}
w.lastSent.Store(time.Now().UnixNano())
w.active.Store(false)
D.Ln("opening broadcast channel listener")
w.conn, e = transport.NewBroadcastChannel(
"kopachmain", w, cx.Config.MulticastPass.Bytes(),
transport.DefaultPort, constant.MaxDatagramSize, handlers,
w.quit,
)
if e != nil {
return
}
// start up the workers
// if cx.Config.Generate.True() {
I.Ln("starting up miner workers")
w.Start()
interrupt.AddHandler(
func() {
w.Stop()
},
)
// }
// controller watcher thread
go func() {
D.Ln("starting controller watcher")
ticker := time.NewTicker(time.Second)
logger := time.NewTicker(time.Second)
out:
for {
select {
case <-ticker.C:
W.Ln("controller watcher ticker")
// if the last message sent was 3 seconds ago the server is almost certainly disconnected or crashed
// so clear FirstSender
since := time.Since(time.Unix(0, w.lastSent.Load()))
wasSending := since > time.Second*6 && w.FirstSender.Load() != 0
if wasSending {
D.Ln("previous current controller has stopped broadcasting", since, w.FirstSender.Load())
// when this string is clear other broadcasts will be listened to
w.FirstSender.Store(0)
// pause the workers
for i := range w.clients {
D.Ln("sending pause to worker", i)
if e := w.clients[i].Pause(); E.Chk(e) {
}
}
}
// if interrupt.Requested() {
// w.StopChan <- struct{}{}
// w.quit.Q()
// }
case <-logger.C:
W.Ln("hash report ticker")
w.hashrate = w.HashReport()
// if interrupt.Requested() {
// w.StopChan <- struct{}{}
// w.quit.Q()
// }
case <-w.StartChan.Wait():
D.Ln("received signal on StartChan")
cx.Config.Generate.T()
// if e = cx.Config.WriteToFile(cx.Config.ConfigFile.V()); E.Chk(e) {
// }
w.Start()
case <-w.StopChan.Wait():
D.Ln("received signal on StopChan")
cx.Config.Generate.F()
// if e = cx.Config.WriteToFile(cx.Config.ConfigFile.V()); E.Chk(e) {
// }
w.Stop()
case s := <-w.PassChan:
F.Ln("received signal on PassChan", s)
cx.Config.MulticastPass.Set(s)
// if e = cx.Config.WriteToFile(cx.Config.ConfigFile.V()); E.Chk(e) {
// }
w.Stop()
w.Start()
// case n := <-w.SetThreads:
// D.Ln("received signal on SetThreads", n)
// cx.Config.GenThreads.Set(n)
// // if e = cx.Config.WriteToFile(cx.Config.ConfigFile.V()); E.Chk(e) {
// // }
// if cx.Config.Generate.True() {
// // always sanitise
// if n < 0 {
// n = int(maxThreads)
// }
// if n > int(maxThreads) {
// n = int(maxThreads)
// }
// w.Stop()
// w.Start()
// }
case <-w.quit.Wait():
D.Ln("stopping from quit")
interrupt.Request()
break out
}
}
D.Ln("finished kopach miner work loop")
log.LogChanDisabled.Store(true)
}()
D.Ln("listening on", constant.UDP4MulticastAddress)
<-w.quit
I.Ln("kopach shutting down") // , interrupt.GoroutineDump())
// <-interrupt.HandlersDone
I.Ln("kopach finished shutdown")
return
}
// these are the handlers for specific message types.
var handlers = transport.Handlers{
string(hashrate.Magic): func(
ctx interface{}, src net.Addr, dst string, b []byte,
) (e error) {
c := ctx.(*Worker)
if !c.active.Load() {
D.Ln("not active")
return
}
var hr hashrate.Hashrate
gotiny.Unmarshal(b, &hr)
// if this is not one of our workers reports ignore it
if hr.ID != c.id {
return
}
count := hr.Count
hc := c.hashCount.Load() + uint64(count)
c.hashCount.Store(hc)
return
},
string(job.Magic): func(
ctx interface{}, src net.Addr, dst string,
b []byte,
) (e error) {
w := ctx.(*Worker)
if !w.active.Load() {
T.Ln("not active")
return
}
jr := templates.Message{}
gotiny.Unmarshal(b, &jr)
w.height = jr.Height
cN := jr.UUID
firstSender := w.FirstSender.Load()
otherSent := firstSender != cN && firstSender != 0
if otherSent {
T.Ln("ignoring other controller job", jr.Nonce, jr.UUID)
// ignore other controllers while one is active and received first
return
}
// if jr.Nonce == w.lastNonce {
// I.Ln("same job again, ignoring (NOT)")
// // return
// }
// w.lastNonce = jr.Nonce
// w.FirstSender.Store(cN)
T.Ln("received job, starting workers on it", jr.Nonce, jr.UUID)
w.lastSent.Store(time.Now().UnixNano())
for i := range w.clients {
if e = w.clients[i].NewJob(&jr); E.Chk(e) {
}
}
return
},
string(pause.Magic): func(
ctx interface{}, src net.Addr, dst string, b []byte,
) (e error) {
w := ctx.(*Worker)
var advt p2padvt.Advertisment
gotiny.Unmarshal(b, &advt)
// p := pause.LoadPauseContainer(b)
fs := w.FirstSender.Load()
ni := advt.IPs
// ni := p.GetIPs()[0].String()
np := advt.UUID
// np := p.GetControllerListenerPort()
// ns := net.JoinHostPort(strings.Split(ni.String(), ":")[0], fmt.Sprint(np))
D.Ln("received pause from server at", ni, np, "stopping", len(w.clients), "workers stopping")
if fs == np {
for i := range w.clients {
// D.Ln("sending pause to worker", i, fs, np)
if e := w.clients[i].Pause(); E.Chk(e) {
}
}
}
w.FirstSender.Store(0)
return
},
// string(sol.Magic): func(
// ctx interface{}, src net.Addr, dst string,
// b []byte,
// ) (e error) {
// // w := ctx.(*Worker)
// // I.Ln("shuffling work due to solution on network")
// // w.FirstSender.Store(0)
// // D.Ln("solution detected from miner at", src)
// // portSlice := strings.Split(w.FirstSender.Load(), ":")
// // if len(portSlice) < 2 {
// // D.Ln("error with solution", w.FirstSender.Load(), portSlice)
// // return
// // }
// // // port := portSlice[1]
// // // j := sol.LoadSolContainer(b)
// // // senderPort := j.GetSenderPort()
// // // if fmt.Sprint(senderPort) == port {
// // // // W.Ln("we found a solution")
// // // // prepend to list of solutions for GUI display if enabled
// // // if *w.cx.Config.KopachGUI {
// // // // D.Ln("length solutions", len(w.solutions))
// // // blok := j.GetMsgBlock()
// // // w.solutions = append(
// // // w.solutions, []SolutionData{
// // // {
// // // time: time.Now(),
// // // height: int(w.height),
// // // algo: fmt.Sprint(
// // // fork.GetAlgoName(blok.Header.Version, w.height),
// // // ),
// // // hash: blok.Header.BlockHashWithAlgos(w.height).String(),
// // // indexHash: blok.Header.BlockHash().String(),
// // // version: blok.Header.Version,
// // // prevBlock: blok.Header.PrevBlock.String(),
// // // merkleRoot: blok.Header.MerkleRoot.String(),
// // // timestamp: blok.Header.Timestamp,
// // // bits: blok.Header.Bits,
// // // nonce: blok.Header.Nonce,
// // // },
// // // }...,
// // // )
// // // if len(w.solutions) > 2047 {
// // // w.solutions = w.solutions[len(w.solutions)-2047:]
// // // }
// // // w.solutionCount = len(w.solutions)
// // // w.Update <- struct{}{}
// // // }
// // // }
// // // D.Ln("no longer listening to", w.FirstSender.Load())
// // // w.FirstSender.Store("")
// return
// },
}
func (w *Worker) HashReport() float64 {
W.Ln("generating hash report")
w.hashSampleBuf.Add(w.hashCount.Load())
av := ewma.NewMovingAverage()
var i int
var prev uint64
if e := w.hashSampleBuf.ForEach(
func(v uint64) (e error) {
if i < 1 {
prev = v
} else {
interval := v - prev
av.Add(float64(interval))
prev = v
}
i++
return nil
},
); E.Chk(e) {
}
average := av.Value()
W.Ln("hashrate average", average)
// panic("aaargh")
return average
}
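HashReport above walks a ring buffer of cumulative hash counts, takes the difference between successive samples, and folds each interval into an exponentially weighted moving average. A minimal self-contained sketch of that calculation (the `ewmaIntervals` name and the explicit smoothing factor are illustrative assumptions, not the ewma package's API):

```go
package main

import "fmt"

// ewmaIntervals mirrors the loop in HashReport: it walks a slice of
// cumulative counters, takes the difference between successive samples,
// and folds each interval into an exponentially weighted moving average.
func ewmaIntervals(samples []uint64, alpha float64) float64 {
	var avg float64
	seeded := false
	for i := 1; i < len(samples); i++ {
		interval := float64(samples[i] - samples[i-1])
		if !seeded {
			avg = interval
			seeded = true
		} else {
			avg = alpha*interval + (1-alpha)*avg
		}
	}
	return avg
}

func main() {
	// cumulative hash counts sampled once per tick; steady 10 hashes/tick
	fmt.Println(ewmaIntervals([]uint64{0, 10, 20, 30}, 0.5))
}
```

Because the buffer stores cumulative counts rather than rates, only the deltas between samples are meaningful, which is why the first sample only seeds `prev`.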

cmd/kopach/log.go Normal file

@@ -0,0 +1,9 @@
package kopach
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)


@@ -0,0 +1,346 @@
package worker
import (
"crypto/cipher"
"math/rand"
"net"
"os"
"sync"
"time"
"github.com/p9c/p9/pkg/bits"
"github.com/p9c/p9/pkg/chainrpc/templates"
"github.com/p9c/p9/pkg/constant"
"github.com/p9c/p9/pkg/fork"
"github.com/p9c/p9/pkg/pipe"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/blockchain"
"github.com/p9c/p9/pkg/chainrpc/hashrate"
"github.com/p9c/p9/pkg/chainrpc/sol"
"go.uber.org/atomic"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/pkg/ring"
"github.com/p9c/p9/pkg/transport"
)
const CountPerRound = 81
type Worker struct {
mx sync.Mutex
id string
pipeConn *pipe.StdConn
dispatchConn *transport.Channel
dispatchReady atomic.Bool
ciph cipher.AEAD
quit qu.C
templatesMessage *templates.Message
uuid atomic.Uint64
roller *Counter
startNonce uint32
startChan qu.C
stopChan qu.C
running atomic.Bool
hashCount atomic.Uint64
hashSampleBuf *ring.BufferUint64
}
type Counter struct {
rpa int32
C atomic.Int32
Algos atomic.Value // []int32
RoundsPerAlgo atomic.Int32
}
// NewCounter returns an initialized algorithm rolling counter that ensures each
// miner does equal amounts of every algorithm
func NewCounter(countPerRound int32) (c *Counter) {
// these will be populated when work arrives
var algos []int32
// Start the counter at a random position
rand.Seed(time.Now().UnixNano())
c = &Counter{}
c.C.Store(int32(rand.Intn(int(countPerRound)+1) + 1))
c.Algos.Store(algos)
c.RoundsPerAlgo.Store(countPerRound)
c.rpa = countPerRound
return
}
// GetAlgoVer returns the next algo version based on the current configuration
func (c *Counter) GetAlgoVer(height int32) (ver int32) {
// the formula below rolls through versions with blocks roundsPerAlgo long for each algorithm by its index
algs := fork.GetAlgoVerSlice(height)
// D.Ln(algs)
if c.RoundsPerAlgo.Load() < 1 {
D.Ln("RoundsPerAlgo is", c.RoundsPerAlgo.Load(), len(algs))
return 0
}
if len(algs) > 0 {
ver = algs[c.C.Load()%int32(len(algs))]
// ver = algs[(c.C.Load()/
// c.CountPerRound.Load())%
// int32(len(algs))]
c.C.Add(1)
}
return
}
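NewCounter and GetAlgoVer above implement round-robin selection over the algorithm version list, so across any full cycle each algorithm is chosen equally often. A stripped-down sketch of that modulo indexing (the `roller` type and its field names are illustrative, not the Counter API):

```go
package main

import "fmt"

// roller selects the next algorithm version by indexing the version
// slice with an ever-incrementing counter modulo the slice length,
// which distributes rounds evenly across all algorithms.
type roller struct{ c int32 }

func (r *roller) next(algos []int32) int32 {
	v := algos[r.c%int32(len(algos))]
	r.c++
	return v
}

func main() {
	r := &roller{}
	algos := []int32{2, 5, 9} // hypothetical version numbers
	counts := map[int32]int{}
	for i := 0; i < 300; i++ {
		counts[r.next(algos)]++
	}
	// 300 rounds over 3 algorithms: each selected exactly 100 times
	fmt.Println(counts[2], counts[5], counts[9])
}
```

Starting the counter at a random offset, as NewCounter does, shifts where the cycle begins without changing the even distribution.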
//
// func (w *Worker) hashReport() {
// w.hashSampleBuf.Add(w.hashCount.Load())
// av := ewma.NewMovingAverage(15)
// var i int
// var prev uint64
// if e := w.hashSampleBuf.ForEach(
// func(v uint64) (e error) {
// if i < 1 {
// prev = v
// } else {
// interval := v - prev
// av.Add(float64(interval))
// prev = v
// }
// i++
// return nil
// },
// ); E.Chk(e) {
// }
// // I.Ln("kopach",w.hashSampleBuf.Cursor, w.hashSampleBuf.Buf)
// Tracef("average hashrate %.2f", av.Value())
// }
// NewWithConnAndSemaphore is exposed to enable the use of an actual network connection while retaining the same RPC API,
// allowing a worker to be configured to run on a bare-metal system with a different launcher main
func NewWithConnAndSemaphore(id string, conn *pipe.StdConn, quit qu.C, uuid uint64) *Worker {
T.Ln("creating new worker")
// msgBlock := wire.WireBlock{Header: wire.BlockHeader{}}
w := &Worker{
id: id,
pipeConn: conn,
quit: quit,
roller: NewCounter(CountPerRound),
startChan: qu.T(),
stopChan: qu.T(),
hashSampleBuf: ring.NewBufferUint64(1000),
}
w.uuid.Store(uuid)
w.dispatchReady.Store(false)
// with this we can report cumulative hash counts as well as using it to distribute algorithms evenly
w.startNonce = uint32(w.roller.C.Load())
interrupt.AddHandler(
func() {
D.Ln("worker", id, "quitting")
w.stopChan <- struct{}{}
// _ = w.pipeConn.Close()
w.dispatchReady.Store(false)
},
)
go worker(w)
return w
}
func worker(w *Worker) {
D.Ln("main work loop starting")
// sampleTicker := time.NewTicker(time.Second)
var nonce uint32
out:
for {
// Pause state
T.Ln("worker pausing")
pausing:
for {
select {
// case <-sampleTicker.C:
// // w.hashReport()
// break
case <-w.stopChan.Wait():
D.Ln("received pause signal while paused")
// drain stop channel in pause
break
case <-w.startChan.Wait():
D.Ln("received start signal")
break pausing
case <-w.quit.Wait():
D.Ln("quitting")
break out
}
}
// Run state
T.Ln("worker running")
running:
for {
select {
// case <-sampleTicker.C:
// // w.hashReport()
// break
case <-w.startChan.Wait():
D.Ln("received start signal while running")
// drain start channel in run mode
break
case <-w.stopChan.Wait():
D.Ln("received pause signal while running")
break running
case <-w.quit.Wait():
D.Ln("worker stopping while running")
break out
default:
if w.templatesMessage == nil || !w.dispatchReady.Load() {
D.Ln("not ready to work")
} else {
// I.Ln("starting mining round")
newHeight := w.templatesMessage.Height
vers := w.roller.GetAlgoVer(newHeight)
nonce++
tn := time.Now().Round(time.Second)
if tn.After(w.templatesMessage.Timestamp.Round(time.Second)) {
w.templatesMessage.Timestamp = tn
}
if w.roller.C.Load()%w.roller.RoundsPerAlgo.Load() == 0 {
D.Ln("switching algorithms", w.roller.C.Load())
// send out broadcast containing worker nonce and algorithm and count of blocks
w.hashCount.Store(w.hashCount.Load() + uint64(w.roller.RoundsPerAlgo.Load()))
hashReport := hashrate.Get(w.roller.RoundsPerAlgo.Load(), vers, newHeight, w.id)
if e := w.dispatchConn.SendMany(
hashrate.Magic,
transport.GetShards(hashReport),
); D.Chk(e) {
}
// reseed the nonce
rand.Seed(time.Now().UnixNano())
nonce = rand.Uint32()
select {
case <-w.quit.Wait():
D.Ln("breaking out of work loop")
break out
case <-w.stopChan.Wait():
D.Ln("received pause signal while running")
break running
default:
}
}
blockHeader := w.templatesMessage.GenBlockHeader(vers)
blockHeader.Nonce = nonce
// D.S(w.templatesMessage)
// D.S(blockHeader)
hash := blockHeader.BlockHashWithAlgos(newHeight)
bigHash := blockchain.HashToBig(&hash)
if bigHash.Cmp(bits.CompactToBig(blockHeader.Bits)) <= 0 {
D.Ln("found solution", newHeight, w.templatesMessage.Nonce, w.templatesMessage.UUID)
srs := sol.Encode(w.templatesMessage.Nonce, w.templatesMessage.UUID, blockHeader)
if e := w.dispatchConn.SendMany(
sol.Magic,
transport.GetShards(srs),
); E.Chk(e) {
}
D.Ln("sent solution")
w.templatesMessage = nil
select {
case <-w.quit.Wait():
D.Ln("breaking out of work loop")
break out
default:
}
break running
}
// D.Ln("completed mining round")
}
}
}
}
D.Ln("worker finished")
interrupt.Request()
}
// New initialises the state for a worker, loading the work function handler that runs a round of processing between
// checking quit signal and work semaphore
func New(id string, quit qu.C, uuid uint64) (w *Worker, conn net.Conn) {
// log.L.SetLevel("trace", true)
sc := pipe.New(os.Stdin, os.Stdout, quit)
return NewWithConnAndSemaphore(id, sc, quit, uuid), sc
}
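The solution test in the work loop above compares the block hash, interpreted as a big integer, against a target expanded from the header's compact `Bits` field. A sketch of the conventional Bitcoin-style compact-to-target expansion that `bits.CompactToBig` performs (treat the exact semantics as an assumption about this codebase's variant):

```go
package main

import (
	"fmt"
	"math/big"
)

// compactToTarget expands the 32-bit compact difficulty representation
// used in block headers: the top byte is a base-256 exponent, the low
// 23 bits are the mantissa, and bit 23 acts as a sign flag.
func compactToTarget(compact uint32) *big.Int {
	mantissa := compact & 0x007fffff
	exponent := uint(compact >> 24)
	var target *big.Int
	if exponent <= 3 {
		// small exponents shift the mantissa down instead of up
		target = big.NewInt(int64(mantissa >> (8 * (3 - exponent))))
	} else {
		target = new(big.Int).Lsh(big.NewInt(int64(mantissa)), 8*(exponent-3))
	}
	if compact&0x00800000 != 0 {
		target.Neg(target)
	}
	return target
}

func main() {
	// the well-known minimum-difficulty compact value 0x1d00ffff
	t := compactToTarget(0x1d00ffff)
	fmt.Println(t.BitLen()) // 0xffff shifted left by 208 bits: 224 bits
}
```

A hash "solves" the block when, as an integer, it is less than or equal to this target, which is exactly the `bigHash.Cmp(...) <= 0` comparison in the loop.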
// NewJob delivers a new job to the worker; it makes the miner start mining from pause or, if running,
// pause, load the new work, and restart
func (w *Worker) NewJob(j *templates.Message, reply *bool) (e error) {
// T.Ln("received new job")
if !w.dispatchReady.Load() {
D.Ln("dispatch not ready")
*reply = true
return
}
if w.templatesMessage != nil {
if j.PrevBlock == w.templatesMessage.PrevBlock {
// T.Ln("not a new job")
*reply = true
return
}
}
// D.S(j)
*reply = true
D.Ln("halting current work")
w.stopChan <- struct{}{}
D.Ln("halt signal sent")
// load the job into the template
if w.templatesMessage == nil {
w.templatesMessage = j
} else {
*w.templatesMessage = *j
}
D.Ln("switching to new job")
w.startChan <- struct{}{}
D.Ln("start signal sent")
return
}
// Pause signals the worker to stop working, releases its semaphore and the worker is then idle
func (w *Worker) Pause(_ int, reply *bool) (e error) {
T.Ln("pausing from IPC")
w.running.Store(false)
w.stopChan <- struct{}{}
*reply = true
return
}
// Stop signals the worker to quit
func (w *Worker) Stop(_ int, reply *bool) (e error) {
D.Ln("stopping from IPC")
w.stopChan <- struct{}{}
defer w.quit.Q()
*reply = true
// time.Sleep(time.Second * 3)
// os.Exit(0)
return
}
// SendPass gives the encryption key configured in the kopach controller (pod) configuration to allow workers to
// dispatch their solutions
func (w *Worker) SendPass(pass []byte, reply *bool) (e error) {
D.Ln("receiving dispatch password", pass)
rand.Seed(time.Now().UnixNano())
// sp := fmt.Sprint(rand.Intn(32767) + 1025)
// rp := fmt.Sprint(rand.Intn(32767) + 1025)
var conn *transport.Channel
conn, e = transport.NewBroadcastChannel(
"kopachworker",
w,
pass,
transport.DefaultPort,
constant.MaxDatagramSize,
transport.Handlers{},
w.quit,
)
if E.Chk(e) {
return
}
w.dispatchConn = conn
w.dispatchReady.Store(true)
*reply = true
return
}

cmd/kopach/worker/log.go Normal file

@@ -0,0 +1,43 @@
package worker
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// F.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

cmd/misc/glom/LICENSE Normal file

@@ -0,0 +1,24 @@
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or
distribute this software, either in source code form or as a compiled
binary, for any purpose, commercial or non-commercial, and by any
means.
In jurisdictions that recognize copyright laws, the author or authors
of this software dedicate any and all copyright interest in the
software to the public domain. We make this dedication for the benefit
of the public at large and to the detriment of our heirs and
successors. We intend this dedication to be an overt act of
relinquishment in perpetuity of all present and future rights to this
software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR
OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR
OTHER DEALINGS IN THE SOFTWARE.
For more information, please refer to <https://unlicense.org>

cmd/misc/glom/README.md Normal file

@@ -0,0 +1,15 @@
# glom
Glom is a command line shell, code and script editor and application platform in one.
It keeps a branching history of commands entered, which are either direct Go expressions or API calls in an arbitrary textual form similar to command line options, and which can be executed immediately or deferred until a testable unit is constructed.
The branching logs of commands entered can be grouped into functions, and functions grouped into packages, and then published to distributed content addressable repositories.
Glom stores a content reference to called functions as they were at the last time they were compiled, ending versioning hell.
Glom allows you to link any and all content to these packages: one can keep a dev journal, write guides, attach video, photographs and so on, right next to the code, using document editors that will be created from an early point.
Last, but not least, it hosts the content you want to share with the world via IPFS or a similar peer-to-peer protocol, whether the whole relevant history or just the current state; the code is fingerprinted by its content and signed by its authors.
With its friendly navigation, even non-programmers will be able to write simple linear scripts composed of variables and API calls, and with easy auditing of the security of apps created and distributed this way, every last statement can be accounted for and malicious code and its authors pruned from the tree.
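The content-addressing described above, where a package pins the functions it calls by a fingerprint of their source rather than by a version number, reduces to hashing the source bytes into a stable identifier. A minimal sketch (sha256 and the `contentID` helper name are assumptions; the README does not specify the digest):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// contentID fingerprints a function's source by its bytes, so any
// change to the source yields a different identifier to pin against.
func contentID(src []byte) string {
	sum := sha256.Sum256(src)
	return hex.EncodeToString(sum[:])
}

func main() {
	a := contentID([]byte("func add(x, y int) int { return x + y }"))
	b := contentID([]byte("func add(x, y int) int { return y + x }"))
	fmt.Println(a != b, len(a)) // distinct IDs, 64 hex characters each
}
```

Pinning against such an identifier means a caller always resolves the exact source it was compiled against, which is the "ending versioning hell" claim above.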

cmd/misc/glom/glom.go Normal file

@@ -0,0 +1,40 @@
package main
import (
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/cmd/misc/glom/pkg/pathtree"
"github.com/p9c/p9/pkg/gel"
"github.com/p9c/p9/pkg/interrupt"
)
type State struct {
*gel.Window
}
func NewState(quit qu.C) *State {
return &State{
Window: gel.NewWindowP9(quit),
}
}
func main() {
quit := qu.T()
state := NewState(quit)
var e error
folderView := pathtree.New(state.Window)
state.Window.SetDarkTheme(folderView.Dark.True())
if e = state.Window.
Size(48, 32).
Title("glom, the visual code editor").
Open().
Run(func(gtx l.Context) l.Dimensions { return folderView.Fn(gtx) }, func() {
interrupt.Request()
quit.Q()
}, quit,
); E.Chk(e) {
}
}

cmd/misc/glom/log.go Normal file

@@ -0,0 +1,43 @@
package main
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// F.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}


@@ -0,0 +1,43 @@
package pathtree
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// F.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}


@@ -0,0 +1,271 @@
package pathtree
import (
l "github.com/p9c/p9/pkg/gel/gio/layout"
"github.com/p9c/p9/pkg/gel/gio/text"
uberatomic "go.uber.org/atomic"
"golang.org/x/exp/shiny/materialdesign/icons"
"github.com/p9c/p9/pkg/opts/binary"
"github.com/p9c/p9/pkg/opts/meta"
"github.com/p9c/p9/pkg/gel"
)
type Widget struct {
*gel.Window
*gel.App
activePage *uberatomic.String
sidebarButtons []*gel.Clickable
statusBarButtons []*gel.Clickable
buttonBarButtons []*gel.Clickable
Size *uberatomic.Int32
}
func New(w *gel.Window) (wg *Widget) {
activePage := uberatomic.NewString("home")
w.Dark = binary.New(meta.Data{}, false, func(b bool) error { return nil })
w.Colors.SetDarkTheme(false)
// I.S(w.Colors)
app := w.App(w.Width, activePage, 48)
wg = &Widget{
Window: w,
App: app,
activePage: uberatomic.NewString("home"),
Size: w.Width,
}
wg.GetButtons()
app.Pages(
map[string]l.Widget{
"home": wg.Page(
"home", gel.Widgets{
// p9.WidgetSize{Widget: p9.EmptyMaxHeight()},
gel.WidgetSize{
Widget: wg.Flex().Flexed(1, wg.H3("glom").Fn).Fn,
},
},
),
},
)
app.SideBar(
[]l.Widget{
// wg.SideBarButton(" ", " ", 11),
wg.SideBarButton("home", "home", 0),
},
)
app.ButtonBar(
[]l.Widget{
wg.PageTopBarButton(
"help", 0, &icons.ActionHelp, func(name string) {
}, app, "",
),
wg.PageTopBarButton(
"home", 1, &icons.ActionLockOpen, func(name string) {
wg.App.ActivePage(name)
}, app, "green",
),
// wg.Flex().Rigid(wg.Inset(0.5, gel.EmptySpace(0, 0)).Fn).Fn,
// wg.PageTopBarButton(
// "quit", 3, &icons.ActionExitToApp, func(name string) {
// wg.MainApp.ActivePage(name)
// }, a, "",
// ),
},
)
app.StatusBar(
[]l.Widget{
wg.StatusBarButton(
"log", 0, &icons.ActionList, func(name string) {
D.Ln("click on button", name)
}, app,
),
},
[]l.Widget{
wg.StatusBarButton(
"settings", 1, &icons.ActionSettings, func(name string) {
D.Ln("click on button", name)
}, app,
),
},
)
return
}
func (w *Widget) Fn(gtx l.Context) l.Dimensions {
return w.App.Fn()(gtx)
}
func (w *Widget) Page(title string, widget gel.Widgets) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
return w.VFlex().
// SpaceEvenly().
Rigid(
w.Responsive(
w.Size.Load(), gel.Widgets{
// p9.WidgetSize{
// Widget: a.ButtonInset(0.25, a.H5(title).Color(wg.App.BodyColorGet()).Fn).Fn,
// },
gel.WidgetSize{
// Size: 800,
Widget: gel.EmptySpace(0, 0),
// a.ButtonInset(0.25, a.Caption(title).Color(wg.BodyColorGet()).Fn).Fn,
},
},
).Fn,
).
Flexed(
1,
w.Inset(
0.25,
w.Responsive(w.Size.Load(), widget).Fn,
).Fn,
).Fn(gtx)
}
}
func (wg *Widget) GetButtons() {
wg.sidebarButtons = make([]*gel.Clickable, 2)
// wg.walletLocked.Store(true)
for i := range wg.sidebarButtons {
wg.sidebarButtons[i] = wg.Clickable()
}
wg.buttonBarButtons = make([]*gel.Clickable, 2)
for i := range wg.buttonBarButtons {
wg.buttonBarButtons[i] = wg.Clickable()
}
wg.statusBarButtons = make([]*gel.Clickable, 2)
for i := range wg.statusBarButtons {
wg.statusBarButtons[i] = wg.Clickable()
}
}
func (w *Widget) SideBarButton(title, page string, index int) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
var scale float32
scale = gel.Scales["H6"]
var color string
background := "Transparent"
color = "DocText"
var ins float32 = 0.5
// var hl = false
if w.App.ActivePageGet() == page || w.App.PreRendering {
background = "PanelBg"
scale = gel.Scales["H6"]
color = "DocText"
// ins = 0.5
// hl = true
}
if title == " " {
scale = gel.Scales["H6"] / 2
}
max := int(w.App.SideBarSize.V)
if max > 0 {
gtx.Constraints.Max.X = max
gtx.Constraints.Min.X = max
}
// D.Ln("sideMAXXXXXX!!", max)
return w.Direction().E().Embed(
w.ButtonLayout(w.sidebarButtons[index]).
CornerRadius(scale).Corners(0).
Background(background).
Embed(
w.Inset(
ins,
func(gtx l.Context) l.Dimensions {
return w.H5(title).
Color(color).
Alignment(text.End).
Fn(gtx)
},
).Fn,
).
SetClick(
func() {
if w.App.MenuOpen {
w.App.MenuOpen = false
}
w.App.ActivePage(page)
},
).
Fn,
).
Fn(gtx)
}
}
func (w *Widget) PageTopBarButton(
name string, index int, ico *[]byte, onClick func(string), app *gel.App,
highlightColor string,
) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
background := "Transparent"
// background := node.TitleBarBackgroundGet()
color := app.MenuColorGet()
if app.ActivePageGet() == name {
color = "PanelText"
// background = "scrim"
background = "PanelBg"
}
// if name == "home" {
// background = "scrim"
// }
if highlightColor != "" {
color = highlightColor
}
ic := w.Icon().
Scale(gel.Scales["H5"]).
Color(color).
Src(ico).
Fn
return w.Flex().Rigid(
// wg.ButtonInset(0.25,
w.ButtonLayout(w.buttonBarButtons[index]).
CornerRadius(0).
Embed(
w.Inset(
0.375,
ic,
).Fn,
).
Background(background).
SetClick(func() { onClick(name) }).
Fn,
// ).Fn,
).Fn(gtx)
}
}
func (w *Widget) StatusBarButton(
name string,
index int,
ico *[]byte,
onClick func(string),
app *gel.App,
) func(gtx l.Context) l.Dimensions {
return func(gtx l.Context) l.Dimensions {
background := app.StatusBarBackgroundGet()
color := app.StatusBarColorGet()
if app.ActivePageGet() == name {
// background, color = color, background
background = "PanelBg"
// color = "Danger"
}
ic := w.Icon().
Scale(gel.Scales["H5"]).
Color(color).
Src(ico).
Fn
return w.Flex().
Rigid(
w.ButtonLayout(w.statusBarButtons[index]).
CornerRadius(0).
Embed(
w.Inset(0.25, ic).Fn,
).
Background(background).
SetClick(func() { onClick(name) }).
Fn,
).Fn(gtx)
}
}

cmd/misc/jrnl/main.go Normal file

@@ -0,0 +1,50 @@
// This is just a convenient cli command to automatically generate a new file
// for a journal entry with names based on unix timestamps
package main
import (
"encoding/json"
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"time"
)
type jrnlCfg struct {
Root string
}
func printErrorAndDie(stuff ...interface{}) {
fmt.Fprintln(os.Stderr, stuff...)
os.Exit(1)
}
func main() {
var home string
var e error
if home, e = os.UserHomeDir(); e != nil {
os.Exit(1)
}
var configFile []byte
if configFile, e = ioutil.ReadFile(
filepath.Join(home, ".jrnl")); e != nil {
printErrorAndDie(e, "~/.jrnl configuration file not found")
}
var cfg jrnlCfg
if e = json.Unmarshal(configFile, &cfg); e != nil {
printErrorAndDie(e, "~/.jrnl config file did not unmarshal")
}
filename := filepath.Join(cfg.Root, fmt.Sprintf("jrnl%d.txt",
time.Now().Unix()))
if e = ioutil.WriteFile(filename,
[]byte(time.Now().Format(time.RFC1123Z)+"\n\n"),
0600,
); e != nil {
printErrorAndDie(e,
"unable to create file (is your keybase filesystem mounted?)")
}
if e = exec.Command("gedit", filename).Run(); e != nil {
printErrorAndDie(e, "editor exited with error")
}
}

cmd/misc/swd/main.go Normal file

@@ -0,0 +1,13 @@
package main
import (
"io/ioutil"
"os"
"path/filepath"
)
func main() {
cwd, eC := os.Getwd()
home, eH := os.UserHomeDir()
if eC != nil || eH != nil {
os.Exit(1)
}
if e := ioutil.WriteFile(filepath.Join(home, ".cwd"), []byte(cwd), 0600); e != nil {
os.Exit(1)
}
}

cmd/node/CHANGES Executable file

@@ -0,0 +1,941 @@
============================================================================
User visible changes for pod
A full-node bitcoin implementation written in Go
============================================================================
Changes in 0.12.0 (Fri Nov 20 2015)
- Protocol and network related changes:
- Add a new checkpoint at block height 382320 (#555)
- Implement BIP0065 which includes support for version 4 blocks, a new
consensus opcode (OP_CHECKLOCKTIMEVERIFY) that enforces transaction
lock times, and a double-threshold switchover mechanism (#535, #459,
#455)
- Implement BIP0111 which provides a new bloom filter service flag and
hence provides support for protocol version 70011 (#499)
- Add a new parameter --nopeerbloomfilters to allow disabling bloom
filter support (#499)
- Reject non-canonically encoded variable length integers (#507)
- Add mainnet peer discovery DNS seed (seed.bitcoin.jonasschnelli.ch)
(#496)
- Correct reconnect handling for persistent peers (#463, #464)
- Ignore requests for block headers if not fully synced (#444)
- Add CLI support for specifying the zone id on IPv6 addresses (#538)
- Fix a couple of issues where the initial block sync could stall (#518,
#229, #486)
- Fix an issue which prevented the --onion option from working as
intended (#446)
- Transaction relay (memory pool) changes:
- Require transactions to only include signatures encoded with the
canonical 'low-s' encoding (#512)
- Add a new parameter --minrelaytxfee to allow the minimum transaction
fee in DUO/kB to be overridden (#520)
- Retain memory pool transactions when they redeem another one that is
removed when a block is accepted (#539)
- Do not send reject messages for a transaction if it is valid but
causes an orphan transaction which depends on it to be determined
as invalid (#546)
- Refrain from attempting to add orphans to the memory pool multiple
times when the transaction they redeem is added (#551)
- Modify minimum transaction fee calculations to scale based on bytes
instead of full kilobyte boundaries (#521, #537)
- Implement signature cache:
- Provides a limited memory cache of validated signatures which is a
huge optimization when verifying blocks for transactions that are
already in the memory pool (#506)
- Add a new parameter '--sigcachemaxsize' which allows the size of the
new cache to be manually changed if desired (#506)
- Mining support changes:
- Notify getblocktemplate long polling clients when a block is pushed
via submitblock (#488)
- Speed up getblocktemplate by making use of the new signature cache
(#506)
- RPC changes:
- Implement getmempoolinfo command (#453)
- Implement getblockheader command (#461)
- Modify createrawtransaction command to accept a new optional parameter
'locktime' (#529)
- Modify listunspent result to include the 'spendable' field (#440)
- Modify getinfo command to include 'errors' field (#511)
- Add timestamps to blockconnected and blockdisconnected notifications
(#450)
- Several modifications to searchrawtransactions command:
- Accept a new optional parameter 'vinextra' which causes the results
to include information about the outputs referenced by a transaction's
inputs (#485, #487)
- Skip entries in the mempool too (#495)
- Accept a new optional parameter 'reverse' to return the results in
reverse order (most recent to oldest) (#497)
- Accept a new optional parameter 'filteraddrs' which causes the
results to only include inputs and outputs which involve the
provided addresses (#516)
- Change the notification order to notify clients about mined
transactions (recvtx, redeemingtx) before the blockconnected
notification (#449)
- Update verifymessage RPC to use the standard algorithm so it is
compatible with other implementations (#515)
- Improve ping statistics by pinging on an interval (#517)
- Websocket changes:
- Implement session command which returns a per-session unique id (#500,
#503)
- podctl utility changes:
- Add getmempoolinfo command (#453)
- Add getblockheader command (#461)
- Add getwalletinfo command (#471)
- Notable developer-related package changes:
- Introduce a new peer package which acts as a common base for creating and
concurrently managing bitcoin network peers (#445)
- Various cleanup of the new peer package (#528, #531, #524, #534,
#549)
- Block heights now consistently use int32 everywhere (#481)
- The BlockHeader type in the wire package now provides the BtcDecode
and BtcEncode methods (#467)
- Update wire package to recognize BIP0064 (getutxo) service bit (#489)
- Export LockTimeThreshold constant from txscript package (#454)
- Export MaxDataCarrierSize constant from txscript package (#466)
- Provide new IsUnspendable function from the txscript package (#478)
- Export variable length string functions from the wire package (#514)
- Export DNS Seeds for each network from the chaincfg package (#544)
- Preliminary work towards separating the memory pool into a separate
package (#525, #548)
- Misc changes:
- Various documentation updates (#442, #462, #465, #460, #470, #473,
#505, #530, #545)
- Add installation instructions for gentoo (#542)
- Ensure an error is shown if OS limits can't be set at startup (#498)
- Tighten the standardness checks for multisig scripts (#526)
- Test coverage improvement (#468, #494, #527, #543, #550)
- Several optimizations (#457, #474, #475, #476, #508, #509)
- Minor code cleanup and refactoring (#472, #479, #482, #519, #540)
- Contributors (alphabetical order):
- Ben Echols
- Bruno Clermont
- danda
- Daniel Krawisz
- Dario Nieuwenhuis
- Dave Collins
- David Hill
- Javed Khan
- Jonathan Gillham
- Joseph Becher
- Josh Rickmar
- Justus Ranvier
- Mawuli Adzoe
- Olaoluwa Osuntokun
- Rune T. Aune
Changes in 0.11.1 (Wed May 27 2015)
- Protocol and network related changes:
- Use correct sub-command in reject message for rejected transactions
(#436, #437)
- Add a new parameter --torisolation which forces new circuits for each
connection when using tor (#430)
- Transaction relay (memory pool) changes:
- Reduce the default maximum number of allowed orphan transactions
to 1000 (#419)
- Add a new parameter --maxorphantx which allows the maximum number of
orphan transactions stored in the mempool to be specified (#419)
- RPC changes:
- Modify listtransactions result to include the 'involveswatchonly' and
'vout' fields (#427)
- Update getrawtransaction result to omit the 'confirmations' field
when it is 0 (#420, #422)
- Update signrawtransaction result to include errors (#423)
- podctl utility changes:
- Add gettxoutproof command (#428)
- Add verifytxoutproof command (#428)
- Notable developer-related package changes:
- The btcec package now provides the ability to perform ECDH
encryption and decryption (#375)
- The block and header validation in the blockchain package has been
split to help pave the way toward concurrent downloads (#386)
- Misc changes:
- Minor peer optimization (#433)
- Contributors (alphabetical order):
- Dave Collins
- David Hill
- Federico Bond
- Ishbir Singh
- Josh Rickmar
Changes in 0.11.0 (Wed May 06 2015)
- Protocol and network related changes:
- **IMPORTANT: Update is required due to the following point**
- Correct a few corner cases in script handling which could result in
forking from the network on non-standard transactions (#425)
- Add a new checkpoint at block height 352940 (#418)
- Optimized script execution (#395, #400, #404, #409)
- Fix a case that could lead to stalled syncs (#138, #296)
- Network address manager changes:
- Implement eclipse attack countermeasures as proposed in
http://cs-people.bu.edu/heilman/eclipse (#370, #373)
- Optional address indexing changes:
- Fix an issue where a reorg could cause an orderly shutdown when the
address index is active (#340, #357)
- Transaction relay (memory pool) changes:
- Increase maximum allowed space for nulldata transactions to 80 bytes
(#331)
- Implement support for the following rules specified by BIP0062:
- The S value in ECDSA signature must be at most half the curve order
(rule 5) (#349)
- Script execution must result in a single non-zero value on the stack
(rule 6) (#347)
- NOTE: All 7 rules of BIP0062 are now implemented
- Use network adjusted time in finalized transaction checks to improve
consistency across nodes (#332)
- Process orphan transactions on acceptance of new transactions (#345)
- RPC changes:
- Add support for a limited RPC user which is not allowed admin level
operations on the server (#363)
- Implement node command for more unified control over connected peers
(#79, #341)
- Implement generate command for regtest/simnet to support
deterministically mining a specified number of blocks (#362, #407)
- Update searchrawtransactions to return the matching transactions in
order (#354)
- Correct an issue with searchrawtransactions where it could return
duplicates (#346, #354)
- Increase precision of 'difficulty' field in getblock result to 8
(#414, #415)
- Omit 'nextblockhash' field from getblock result when it is empty
(#416, #417)
- Add 'id' and 'timeoffset' fields to getpeerinfo result (#335)
- Websocket changes:
- Implement new commands stopnotifyspent, stopnotifyreceived,
stopnotifyblocks, and stopnotifynewtransactions to allow clients to
cancel notification registrations (#122, #342)
- podctl utility changes:
- A single dash can now be used as an argument to cause that argument to
be read from stdin (#348)
- Add generate command
- Notable developer-related package changes:
- The new version 2 btcjson package has now replaced the deprecated
version 1 package (#368)
- The btcec package now performs all signing using RFC6979 deterministic
signatures (#358, #360)
- The txscript package has been significantly cleaned up and had a few
API changes (#387, #388, #389, #390, #391, #392, #393, #395, #396,
#400, #403, #404, #405, #406, #408, #409, #410, #412)
- A new PkScriptLocs function has been added to the wire package MsgTx
type which provides callers that deal with scripts optimization
opportunities (#343)
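Deterministic signing removes the random nonce whose reuse leaks the private key. The sketch below shows only the core idea with a one-shot HMAC; RFC6979 itself specifies a more involved HMAC-DRBG loop, which is what btcec implements, so this is NOT the real construction:

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"fmt"
	"math/big"
)

// secp256k1 group order.
var n, _ = new(big.Int).SetString(
	"FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141", 16)

// deterministicNonce derives a signing nonce from the private key and the
// message hash, so the same input always yields the same nonce and two
// different messages never share one. Illustrative only; not RFC6979.
func deterministicNonce(priv, msgHash []byte) *big.Int {
	mac := hmac.New(sha256.New, priv)
	mac.Write(msgHash)
	k := new(big.Int).SetBytes(mac.Sum(nil))
	return k.Mod(k, n)
}

func main() {
	priv := []byte("example private key bytes")
	h1 := sha256.Sum256([]byte("tx one"))
	h2 := sha256.Sum256([]byte("tx two"))
	k1 := deterministicNonce(priv, h1[:])
	again := deterministicNonce(priv, h1[:])
	k2 := deterministicNonce(priv, h2[:])
	fmt.Println(k1.Cmp(again) == 0, k1.Cmp(k2) != 0)
}
```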
- Misc changes:
- Minor wire hashing optimizations (#366, #367)
- Other minor internal optimizations
- Contributors (alphabetical order):
- Alex Akselrod
- Arne Brutschy
- Chris Jepson
- Daniel Krawisz
- Dave Collins
- David Hill
- Jimmy Song
- Jonas Nick
- Josh Rickmar
- Olaoluwa Osuntokun
- Oleg Andreev
Changes in 0.10.0 (Sun Mar 01 2015)
- Protocol and network related changes:
- Add a new checkpoint at block height 343185
- Implement BIP0066 which includes support for version 3 blocks, a new
consensus rule which prevents non-DER encoded signatures, and a
double-threshold switchover mechanism
- Rather than announcing all known addresses on getaddr requests which
can possibly result in multiple messages, randomize the results and
limit them to the max allowed by a single message (1000 addresses)
- Add more reserved IP spaces to the address manager
- Transaction relay (memory pool) changes:
- Make transactions which contain reserved opcodes nonstandard
- No longer accept or relay free and low-fee transactions that have
insufficient priority to be mined in the next block
- Implement support for the following rules specified by BIP0062:
- ECDSA signature must use strict DER encoding (rule 1)
- The signature script must only contain push operations (rule 2)
- All push operations must use the smallest possible encoding (rule 3)
- All stack values interpreted as a number must be encoded using the
shortest possible form (rule 4)
- NOTE: Rule 1 was already enforced, however the entire script now
evaluates to false rather than only the signature verification as
required by BIP0062
- Allow transactions with nulldata transaction outputs to be treated as
standard
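Rule 4 (shortest possible number encoding) boils down to one check on the most significant byte of the little-endian, sign-magnitude value. A sketch mirroring the widely used consensus logic, not pod's exact code:

```go
package main

import "fmt"

// isMinimalNumber reports whether a little-endian, sign-magnitude script
// number uses the shortest possible encoding, per BIP0062 rule 4.
func isMinimalNumber(v []byte) bool {
	if len(v) == 0 {
		return true // zero encodes as the empty vector
	}
	// If the most significant byte carries only the sign bit (or nothing),
	// the byte below it must need that sign bit; otherwise the top byte
	// could have been dropped entirely.
	if v[len(v)-1]&0x7f == 0 {
		if len(v) == 1 || v[len(v)-2]&0x80 == 0 {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(isMinimalNumber([]byte{0x01}))       // 1: minimal
	fmt.Println(isMinimalNumber([]byte{0x01, 0x00})) // 1 with padding: not minimal
	fmt.Println(isMinimalNumber([]byte{0xff, 0x80})) // -255: the sign byte is required
}
```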
- Mining support changes:
- Modify the getblocktemplate RPC to generate and return block templates
for version 3 blocks which are compatible with BIP0066
- Allow getblocktemplate to serve blocks when the current time is
less than the minimum allowed time for a generated block template
(https://github.com/p9c/p9/issues/209)
- Crypto changes:
- Optimize scalar multiplication by the base point by using a
pre-computed table which results in approximately a 35% speedup
(https://github.com/btcsuite/btcec/issues/2)
- Optimize general scalar multiplication by using the secp256k1
endomorphism which results in approximately a 17-20% speedup
(https://github.com/btcsuite/btcec/issues/1)
- Optimize general scalar multiplication by using non-adjacent form
which results in approximately an additional 8% speedup
(https://github.com/btcsuite/btcec/issues/3)
- Implement optional address indexing:
- Add a new parameter --addrindex which will enable the creation of an
address index which can be queried to determine all transactions which
involve a given address
(https://github.com/p9c/p9/issues/190)
- Add a new logging subsystem for address index related operations
- Support new searchrawtransactions RPC
(https://github.com/p9c/p9/issues/185)
- RPC changes:
- Require TLS version 1.2 as the minimum version for all TLS connections
- Provide support for disabling TLS when only listening on localhost
(https://github.com/p9c/p9/pull/192)
- Modify help output for all commands to provide much more consistent
and detailed information
- Correct a case in getrawtransaction which would refuse to serve certain
transactions with invalid scripts
(https://github.com/p9c/p9/issues/210)
- Correct error handling in the getrawtransaction RPC which could lead
to a crash in rare cases
(https://github.com/p9c/p9/issues/196)
- Update getinfo RPC to include the appropriate 'timeoffset' calculated
from the median network time
- Modify listreceivedbyaddress result type to include txids field so it
is compatible
- Add 'iswatchonly' field to validateaddress result
- Add 'startingpriority' and 'currentpriority' fields to getrawmempool
(https://github.com/p9c/p9/issues/178)
- Don't omit the 'confirmations' field from getrawtransaction when it is
zero
- Websocket changes:
- Modify the behavior of the rescan command to automatically register
for notifications about transactions paying to rescanned addresses
or spending outputs from the final rescan utxo set when the rescan
is through the best block in the chain
- podctl utility changes:
- Make the list of commands available via the -l option rather than
dumping the entire list on usage errors
- Alphabetize and categorize the list of commands by chain and wallet
- Make the help option only show the help options instead of also
dumping all of the commands
- Make the usage syntax much more consistent and correct a few cases of
misnamed fields
(https://github.com/p9c/p9/issues/305)
- Improve usage errors to show the specific parameter number, reason,
and error code
- Only show the usage for a specific command when a valid command
is provided with invalid parameters
- Add support for a SOCKS5 proxy
- Modify output for integer fields (such as timestamps) to display
normally instead of in scientific notation
- Add invalidateblock command
- Add reconsiderblock command
- Add createnewaccount command
- Add renameaccount command
- Add searchrawtransactions command
- Add importaddress command
- Add importpubkey command
- showblock utility changes:
- Remove utility in favor of the RPC getblock method
- Notable developer-related package changes:
- Many of the core packages have been relocated into the pod repository
(https://github.com/p9c/p9/issues/214)
- A new version of the btcjson package that has been completely
redesigned from the ground up based upon how the project has
evolved and lessons learned while using it since it was first written
is now available in the btcjson/v2/btcjson directory
- This will ultimately replace the current version so anyone making
use of this package will need to update their code accordingly
- The btcec package now provides better facilities for working directly
with its public and private keys without having to mix elements from
the ecdsa package
- Update the script builder to ensure all rules specified by BIP0062 are
adhered to when creating scripts
- The blockchain package now provides a MedianTimeSource interface and
concrete implementation for providing time samples from remote peers
and using that data to calculate an offset against the local time
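Using the median of peer-reported offsets, rather than the mean, keeps one badly skewed clock from moving the adjusted time. A simplified sketch of the principle behind a MedianTimeSource; details here are not the blockchain package's implementation:

```go
package main

import (
	"fmt"
	"sort"
)

// medianOffset returns the median of the clock offsets (in seconds)
// sampled from remote peers; the adjusted network time is then the
// local clock plus this value.
func medianOffset(offsets []int64) int64 {
	if len(offsets) == 0 {
		return 0
	}
	s := append([]int64(nil), offsets...)
	sort.Slice(s, func(i, j int) bool { return s[i] < s[j] })
	return s[len(s)/2]
}

func main() {
	// One wildly wrong peer clock (+3600s) does not drag the median with it.
	fmt.Println(medianOffset([]int64{-2, 1, 2, 3, 3600}))
}
```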
- Misc changes:
- Fix a slow memory leak due to tickers not being stopped
(https://github.com/p9c/p9/issues/189)
- Fix an issue where a mix of orphans and SPV clients could trigger a
condition where peers would no longer be served
(https://github.com/p9c/p9/issues/231)
- The RPC username and password can now contain symbols which previously
conflicted with special symbols used in URLs
- Improve handling of obtaining random nonces to prevent cases where it
could error when not enough entropy was available
- Improve handling of home directory creation errors such as in the case
of unmounted symlinks (https://github.com/p9c/p9/issues/193)
- Improve the error reporting for rejected transactions to include the
inputs which are missing and/or being double spent
- Update sample config file with new options and correct a comment
regarding the fact the RPC server only listens on localhost by default
(https://github.com/p9c/p9/issues/218)
- Update the continuous integration builds to run several tools which
help keep code quality high
- Significant amount of internal code cleanup and improvements
- Other minor internal optimizations
- Code Contributors (alphabetical order):
- Beldur
- Ben Holden-Crowther
- Dave Collins
- David Evans
- David Hill
- Guilherme Salgado
- Javed Khan
- Jimmy Song
- John C. Vernaleo
- Jonathan Gillham
- Josh Rickmar
- Michael Ford
- Michail Kargakis
- kac
- Olaoluwa Osuntokun
Changes in 0.9.0 (Sat Sep 20 2014)
- Protocol and network related changes:
- Add a new checkpoint at block height 319400
- Add support for BIP0037 bloom filters
(https://github.com/conformal/pod/issues/132)
- Implement BIP0061 reject handling and hence support for protocol
version 70002 (https://github.com/conformal/pod/issues/133)
- Add testnet DNS seeds for peer discovery (testnet-seed.alexykot.me
and testnet-seed.bitcoin.schildbach.de)
- Add mainnet DNS seed for peer discovery (seeds.bitcoin.open-nodes.org)
- Make multisig transactions with non-null dummy data nonstandard
(https://github.com/conformal/pod/issues/131)
- Make transactions with an excessive number of signature operations
nonstandard
- Perform initial DNS lookups concurrently, which allows connections to be
established more quickly
- Improve the address manager to significantly reduce memory usage and
add tests
- Remove orphan transactions when they appear in a mined block
(https://github.com/conformal/pod/issues/166)
- Apply incremental back off on connection retries for persistent peers
that give invalid replies to mirror the logic used for failed
connections (https://github.com/conformal/pod/issues/103)
- Correct rate-limiting of free and low-fee transactions
- Mining support changes:
- Implement getblocktemplate RPC with the following support:
(https://github.com/conformal/pod/issues/124)
- BIP0022 Non-Optional Sections
- BIP0022 Long Polling
- BIP0023 Basic Pool Extensions
- BIP0023 Mutation coinbase/append
- BIP0023 Mutations time, time/increment, and time/decrement
- BIP0023 Mutation transactions/add
- BIP0023 Mutations prevblock, coinbase, and generation
- BIP0023 Block Proposals
- Implement built-in concurrent CPU miner
(https://github.com/conformal/pod/issues/137)
NOTE: CPU mining on mainnet is pointless. This has been provided
for testing purposes such as for the new simulation test network
- Add --generate flag to enable CPU mining
- Deprecate the --getworkkey flag in favor of --miningaddr which
specifies which addresses generated blocks will choose from to pay
the subsidy to
- RPC changes:
- Implement gettxout command
(https://github.com/conformal/pod/issues/141)
- Implement validateaddress command
- Implement verifymessage command
- Mark getunconfirmedbalance RPC as wallet-only
- Mark getwalletinfo RPC as wallet-only
- Update getgenerate, setgenerate, gethashespersec, and getmininginfo
to return the appropriate information about new CPU mining status
- Modify getpeerinfo pingtime and pingwait field types to float64 so
they are compatible
- Improve disconnect handling for normal HTTP clients
- Make error code returns for invalid hex more consistent
- Websocket changes:
- Switch to a new more efficient websocket package
(https://github.com/conformal/pod/issues/134)
- Add rescanfinished notification
- Modify the rescanprogress notification to include block hash as well
as height (https://github.com/conformal/pod/issues/151)
- podctl utility changes:
- Accept --simnet flag which automatically selects the appropriate port
and TLS certificates needed to communicate with pod and btcwallet on
the simulation test network
- Fix createrawtransaction command to send amounts denominated in DUO
- Add estimatefee command
- Add estimatepriority command
- Add getmininginfo command
- Add getnetworkinfo command
- Add gettxout command
- Add lockunspent command
- Add signrawtransaction command
- addblock utility changes:
- Accept --simnet flag which automatically selects the appropriate port
and TLS certificates needed to communicate with pod and btcwallet on
the simulation test network
- Notable developer-related package changes:
- Provide a new bloom package in btcutil which allows creating and
working with BIP0037 bloom filters
- Provide a new hdkeychain package in btcutil which allows working with
BIP0032 hierarchical deterministic key chains
- Introduce a new btcnet package which houses network parameters
- Provide new simnet network (--simnet) which is useful for private
simulation testing
- Enforce low S values in serialized signatures as detailed in BIP0062
- Return errors from all methods on the podb.Db interface
(https://github.com/conformal/podb/issues/5)
- Allow behavior flags to alter btcchain.ProcessBlock
(https://github.com/conformal/btcchain/issues/5)
- Provide a new SerializeSize API for blocks
(https://github.com/conformal/btcwire/issues/19)
- Several of the core packages now work with Google App Engine
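A BIP0037 filter is a plain bloom filter: k hash functions set k bits per inserted item, and membership queries can yield false positives but never false negatives, which is what lets SPV clients request only relevant transactions. A toy sketch of the structure; BIP0037 specifies murmur3 with per-function tweaks, and FNV is substituted here only to keep the sketch dependency-free, so it is NOT wire-compatible with the bloom package:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// bloom is a toy bloom filter in the shape BIP0037 describes.
type bloom struct {
	bits  []byte
	funcs uint32
}

func newBloom(sizeBytes int, funcs uint32) *bloom {
	return &bloom{bits: make([]byte, sizeBytes), funcs: funcs}
}

func (b *bloom) index(data []byte, i uint32) uint32 {
	h := fnv.New32a()
	h.Write([]byte{byte(i)}) // cheap stand-in for BIP0037's hash tweak
	h.Write(data)
	return h.Sum32() % uint32(len(b.bits)*8)
}

// Add sets one bit per hash function for the item.
func (b *bloom) Add(data []byte) {
	for i := uint32(0); i < b.funcs; i++ {
		n := b.index(data, i)
		b.bits[n/8] |= 1 << (n % 8)
	}
}

// Contains reports "definitely not present" or "maybe present".
func (b *bloom) Contains(data []byte) bool {
	for i := uint32(0); i < b.funcs; i++ {
		n := b.index(data, i)
		if b.bits[n/8]&(1<<(n%8)) == 0 {
			return false
		}
	}
	return true
}

func main() {
	f := newBloom(64, 5)
	f.Add([]byte("txid-aabbcc"))
	fmt.Println(f.Contains([]byte("txid-aabbcc")))
}
```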
- Misc changes:
- Correct an issue where the database could corrupt under certain
circumstances which would require a new chain download
- Slightly optimize deserialization
- Use the correct IP block for he.net
- Fix an issue where it was possible the block manager could hang on
shutdown
- Update sample config file so the comments are on a separate line
rather than the end of a line so they are not interpreted as settings
(https://github.com/conformal/pod/issues/135)
- Correct an issue where getdata requests were not being properly
throttled which could lead to larger than necessary memory usage
- Always show help when given the help flag even when the config file
contains invalid entries
- General code cleanup and minor optimizations
Changes in 0.8.0-beta (Sun May 25 2014)
- Pod is now Beta (https://github.com/conformal/pod/issues/130)
- Add a new checkpoint at block height 300255
- Protocol and network related changes:
- Lower the minimum transaction relay fee to 1000 satoshi to match
recent reference client changes
(https://github.com/conformal/pod/issues/100)
- Raise the maximum signature script size to support standard 15-of-15
multi-signature pay-to-script-hash transactions with compressed pubkeys
to remain compatible with the reference client
(https://github.com/conformal/pod/issues/128)
- Reduce max bytes allowed for a standard nulldata transaction to 40 for
compatibility with the reference client
- Introduce a new btcnet package which houses all of the network params
for each network (mainnet, testnet3, regtest) to ultimately enable
easier addition and tweaking of networks without needing to change
several packages
- Fix several script discrepancies found by reference client test data
- Add new DNS seed for peer discovery (seed.bitnodes.io)
- Reduce the max known inventory cache from 20000 items to 1000 items
- Fix an issue where unknown inventory types could lead to a hung peer
- Implement inventory rebroadcast handler for sendrawtransaction
(https://github.com/conformal/pod/issues/99)
- Update user agent to fully support BIP0014
(https://github.com/conformal/btcwire/issues/10)
- Implement initial mining support:
- Add a new logging subsystem for mining related operations
- Implement infrastructure for creating block templates
- Provide options to control block template creation settings
- Support the getwork RPC
- Allow address identifiers to apply to more than one network since both
testnet3 and the regression test network unfortunately use the same
identifier
- RPC changes:
- Set the content type for HTTP POST RPC connections to application/json
(https://github.com/conformal/pod/issues/121)
- Modified the RPC server startup so it only requires at least one valid
listen interface
- Correct an error path where it was possible certain errors would not
be returned
- Implement getwork command
(https://github.com/conformal/pod/issues/125)
- Update sendrawtransaction command to reject orphans
- Update sendrawtransaction command to include the reason a transaction
was rejected
- Update getinfo command to populate connection count field
- Update getinfo command to include relay fee field
(https://github.com/conformal/pod/issues/107)
- Allow transactions submitted with sendrawtransaction to bypass the
rate limiter
- Allow the getcurrentnet and getbestblock extensions to be accessed via
HTTP POST in addition to Websockets
(https://github.com/conformal/pod/issues/127)
- Websocket changes:
- Rework notifications to ensure they are delivered in the order they
occur
- Rename notifynewtxs command to notifyreceived (funds received)
- Rename notifyallnewtxs command to notifynewtransactions
- Rename alltx notification to txaccepted
- Rename allverbosetx notification to txacceptedverbose
(https://github.com/conformal/pod/issues/98)
- Add rescan progress notification
- Add recvtx notification
- Add redeemingtx notification
- Modify notifyspent command to accept an array of outpoints
(https://github.com/conformal/pod/issues/123)
- Significantly optimize the rescan command to yield up to a 60x speed
increase
- podctl utility changes:
- Add createencryptedwallet command
- Add getblockchaininfo command
- Add importwallet command
- Add addmultisigaddress command
- Add setgenerate command
- Accept --testnet and --wallet flags which automatically select
the appropriate port and TLS certificates needed to communicate
with pod and btcwallet (https://github.com/conformal/pod/issues/112)
- Allow path expansion from config file entries
(https://github.com/conformal/pod/issues/113)
- Minor refactoring to simplify handling of options
- addblock utility changes:
- Improve logging by making it consistent with the logging provided by
pod (https://github.com/conformal/pod/issues/90)
- Improve several package APIs for developers:
- Add new amount type for consistently handling monetary values
- Add new coin selector API
- Add new WIF (Wallet Import Format) API
- Add new crypto types for private keys and signatures
- Add new API to sign transactions including script merging and hash
types
- Expose function to extract all pushed data from a script
(https://github.com/conformal/btcscript/issues/8)
- Misc changes:
- Optimize address manager shuffling to do 67% less work on average
- Resolve a couple of benign data races found by the race detector
(https://github.com/conformal/pod/issues/101)
- Add IP address to all peer related errors to clarify which peer is the
cause (https://github.com/conformal/pod/issues/102)
- Fix a UPNP case issue that prevented the --upnp option from working
with some UPNP servers
- Update documentation in the sample config file regarding debug levels
- Adjust some logging levels to improve debug messages
- Improve the throughput of query messages to the block manager
- Several minor optimizations to reduce GC churn and enhance speed
- Other minor refactoring
- General code cleanup
Changes in 0.7.0 (Thu Feb 20 2014)
- Fix an issue when parsing scripts which contain a multi-signature script
which require zero signatures such as testnet block
000000001881dccfeda317393c261f76d09e399e15e27d280e5368420f442632
(https://github.com/conformal/btcscript/issues/7)
- Add check to ensure all transactions accepted to mempool only contain
canonical data pushes (https://github.com/conformal/btcscript/issues/6)
- Fix an issue causing excessive memory consumption
- Significantly rework and improve the websocket notification system:
- Each client is now independent so slow clients no longer limit the
speed of other connected clients
- Potentially long-running operations such as rescans are now run in
their own handler and rate-limited to one operation at a time without
preventing simultaneous requests from the same client for the faster
requests or notifications
- A couple of scenarios which could cause shutdown to hang have been
resolved
- Update notifynewtx notifications to support all address types instead
of only pay-to-pubkey-hash
- Provide a --rpcmaxwebsockets option to allow limiting the number of
concurrent websocket clients
- Add a new websocket command notifyallnewtxs to request notifications
(https://github.com/conformal/pod/issues/86) (thanks @flammit)
- Improve podctl utility in the following ways:
- Add getnetworkhashps command
- Add gettransaction command (wallet-specific)
- Add signmessage command (wallet-specific)
- Update getwork command to accept
- Continue cleanup and work on implementing the RPC API:
- Implement getnettotals command
(https://github.com/conformal/pod/issues/84)
- Implement networkhashps command
(https://github.com/conformal/pod/issues/87)
- Update getpeerinfo to always include syncnode field even when false
- Remove help addenda for getpeerinfo now that it supports all fields
- Close standard RPC connections on auth failure
- Provide a --rpcmaxclients option to allow limiting the number of
concurrent RPC clients (https://github.com/conformal/pod/issues/68)
- Include IP address in RPC auth failure log messages
- Resolve a rather harmless data race found by the race detector
(https://github.com/conformal/pod/issues/94)
- Increase block priority size and max standard transaction size to 50k
and 100k, respectively (https://github.com/conformal/pod/issues/71)
- Add rate limiting of free transactions to the memory pool to prevent
penny flooding (https://github.com/conformal/pod/issues/40)
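Penny flooding is countered by giving free transactions a byte budget per time window that drains as time passes. A hedged sketch of one such limiter; the numbers and decay rule are illustrative, not pod's policy:

```go
package main

import "fmt"

// freeTxLimiter admits free transactions until a byte budget is spent;
// the accumulated counter decays linearly, draining fully over a minute.
type freeTxLimiter struct {
	limitPerMin float64 // bytes of free transactions allowed per minute
	pending     float64 // decayed total of recently admitted bytes
	lastSec     int64
}

func (l *freeTxLimiter) allow(nowSec int64, txBytes int) bool {
	// Decay the accumulated total for the time elapsed since the last call.
	elapsed := float64(nowSec - l.lastSec)
	l.pending -= l.limitPerMin * elapsed / 60
	if l.pending < 0 {
		l.pending = 0
	}
	l.lastSec = nowSec
	if l.pending+float64(txBytes) > l.limitPerMin {
		return false // over budget: reject the free transaction for now
	}
	l.pending += float64(txBytes)
	return true
}

func main() {
	l := &freeTxLimiter{limitPerMin: 1000}
	fmt.Println(l.allow(0, 600), l.allow(0, 600)) // second one exceeds the budget
	fmt.Println(l.allow(60, 600))                 // a minute later the budget has drained
}
```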
- Provide a --logdir option (https://github.com/conformal/pod/issues/95)
- Change the default log file path to include the network
- Add a new ScriptBuilder interface to btcscript to support creation of
custom scripts (https://github.com/conformal/btcscript/issues/5)
- General code cleanup
Changes in 0.6.0 (Tue Feb 04 2014)
- Fix an issue when parsing scripts which contain invalid signatures that
caused a chain fork on block
0000000000000001e4241fd0b3469a713f41c5682605451c05d3033288fb2244
- Correct an issue which could lead to an error in removeBlockNode
(https://github.com/conformal/btcchain/issues/4)
- Improve addblock utility as follows:
- Check imported blocks against all chain rules and checkpoints
- Skip blocks which are already known so you can stop and restart the
import or start the import after you have already downloaded a portion
of the chain
- Correct an issue where the utility did not shutdown cleanly after
processing all blocks
- Add error on attempt to import orphan blocks
- Improve error handling and reporting
- Display statistics after input file has been fully processed
- Rework, optimize, and improve headers-first mode:
- Resuming the chain sync from any point before the final checkpoint
will now use headers-first mode
(https://github.com/conformal/pod/issues/69)
- Verify all checkpoints as opposed to only the final one
- Reduce and bound memory usage
- Rollback to the last known good point when a header does not match a
checkpoint
- Log information about what is happening with headers
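The checkpoint verification step above reduces to two checks per downloaded header: it must link to its parent, and at any checkpoint height its hash must match the expected value, rolling back to the last good height otherwise. A simplified sketch with illustrative types, not pod's block manager:

```go
package main

import "fmt"

// header is a pared-down block header: just what the sketch needs.
type header struct {
	hash, prevHash string
}

// checkHeaders walks a headers-first batch starting after startHeight,
// returning the last verified height and whether the whole batch passed.
func checkHeaders(start string, headers []header, checkpoints map[int]string, startHeight int) (int, bool) {
	prev := start
	for i, h := range headers {
		height := startHeight + 1 + i
		if h.prevHash != prev {
			return height - 1, false // broken link: report last good height
		}
		if want, ok := checkpoints[height]; ok && want != h.hash {
			return height - 1, false // checkpoint mismatch: roll back here
		}
		prev = h.hash
	}
	return startHeight + len(headers), true
}

func main() {
	headers := []header{{"h1", "h0"}, {"h2", "h1"}, {"h3", "h2"}}
	fmt.Println(checkHeaders("h0", headers, map[int]string{2: "h2"}, 0))
	fmt.Println(checkHeaders("h0", headers, map[int]string{2: "WRONG"}, 0))
}
```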
- Improve podctl utility in the following ways:
- Add getaddednodeinfo command
- Add getnettotals command
- Add getblocktemplate command (wallet-specific)
- Add getwork command (wallet-specific)
- Add getnewaddress command (wallet-specific)
- Add walletpassphrasechange command (wallet-specific)
- Add walletlock command (wallet-specific)
- Add sendfrom command (wallet-specific)
- Add sendmany command (wallet-specific)
- Add settxfee command (wallet-specific)
- Add listsinceblock command (wallet-specific)
- Add listaccounts command (wallet-specific)
- Add keypoolrefill command (wallet-specific)
- Add getreceivedbyaccount command (wallet-specific)
- Add getrawchangeaddress command (wallet-specific)
- Add gettxoutsetinfo command (wallet-specific)
- Add listaddressgroupings command (wallet-specific)
- Add listlockunspent command (wallet-specific)
- Add listlock command (wallet-specific)
- Add listreceivedbyaccount command (wallet-specific)
- Add validateaddress command (wallet-specific)
- Add verifymessage command (wallet-specific)
- Add sendtoaddress command (wallet-specific)
- Continue cleanup and work on implementing the RPC API:
- Implement submitblock command
(https://github.com/conformal/pod/issues/61)
- Implement help command
- Implement ping command
- Implement getaddednodeinfo command
(https://github.com/conformal/pod/issues/78)
- Implement getinfo command
- Update getpeerinfo to support bytesrecv and bytessent
(https://github.com/conformal/pod/issues/83)
- Improve and correct several RPC server and websocket areas:
- Change the connection endpoint for websockets from /wallet to /ws
(https://github.com/conformal/pod/issues/80)
- Implement an alternative authentication for websockets so clients
such as javascript from browsers that don't support setting HTTP
headers can authenticate (https://github.com/conformal/pod/issues/77)
- Add an authentication deadline for RPC connections
(https://github.com/conformal/pod/issues/68)
- Use standard authentication failure responses for RPC connections
- Make automatically generated certificate more standard so it works
from client such as node.js and Firefox
- Correct some minor issues which could prevent the RPC server from
shutting down in an orderly fashion
- Make all websocket notifications require registration
- Change the data sent over websockets to text since it is JSON-RPC
- Allow connections that do not have an Origin header set
- Expose and track the number of bytes read and written per peer
(https://github.com/conformal/btcwire/issues/6)
- Correct an issue with sendrawtransaction when invoked via websockets
which prevented a minedtx notification from being added
- Rescan operations issued from remote wallets are now stopped when
the wallet disconnects mid-operation
(https://github.com/conformal/pod/issues/66)
- Several optimizations related to fetching block information from the
database
- General code cleanup
Changes in 0.5.0 (Mon Jan 13 2014)
- Optimize initial block download by introducing a new mode which
downloads the block headers first (up to the final checkpoint)
- Improve peer handling to remove the potential for slow peers to cause
sluggishness amongst all peers
(https://github.com/conformal/pod/issues/63)
- Fix an issue where the initial block sync could stall when the sync peer
disconnects (https://github.com/conformal/pod/issues/62)
- Correct an issue where --externalip was doing a DNS lookup on the full
host:port instead of just the host portion
(https://github.com/conformal/pod/issues/38)
- Fix an issue which could lead to a panic on chain switches
(https://github.com/conformal/pod/issues/70)
- Improve podctl utility in the following ways:
- Show getdifficulty output as floating point to 6 digits of precision
- Show all JSON object replies formatted as standard JSON
- Allow podctl getblock to accept optional params
- Add getaccount command (wallet-specific)
- Add getaccountaddress command (wallet-specific)
- Add sendrawtransaction command
- Continue cleanup and work on implementing RPC API calls
- Update getrawmempool to support new optional verbose flag
- Update getrawtransaction to match the reference client
- Update getblock to support new optional verbose flag
- Update raw transactions to fully match the reference client including
support for all transaction types and address types
- Correct getrawmempool fee field to return DUO instead of Satoshi
- Correct getpeerinfo service flag to return 8 digit string so it
matches the reference client
- Correct verifychain to return a boolean
- Implement decoderawtransaction command
- Implement createrawtransaction command
- Implement decodescript command
- Implement gethashespersec command
- Allow RPC handler overrides when invoked via a websocket versus
legacy connection
- Add new DNS seed for peer discovery
- Display user agent on new valid peer log message
(https://github.com/conformal/pod/issues/64)
- Notify wallet when new transactions that pay to registered addresses
show up in the mempool before being mined into a block
- Support a tor-specific proxy in addition to a normal proxy
(https://github.com/conformal/pod/issues/47)
- Remove deprecated sqlite3 imports from utilities
- Remove leftover profile write from addblock utility
- Quite a bit of code cleanup and refactoring to improve maintainability
Changes in 0.4.0 (Thu Dec 12 2013)
- Allow listen interfaces to be specified via --listen instead of only the
port (https://github.com/conformal/pod/issues/33)
- Allow listen interfaces for the RPC server to be specified via
--rpclisten instead of only the port
(https://github.com/conformal/pod/issues/34)
- Only disable listening when --connect or --proxy are used when no
--listen interface are specified
(https://github.com/conformal/pod/issues/10)
- Add several new standard transaction checks to transaction memory pool:
- Support nulldata scripts as standard
- Only allow a max of one nulldata output per transaction
- Enforce a maximum of 3 public keys in multi-signature transactions
- The number of signatures in multi-signature transactions must not
exceed the number of public keys
- The number of inputs to a signature script must match the expected
number of inputs for the script type
- The number of inputs pushed onto the stack by a redeeming signature
script must match the number of inputs consumed by the referenced
public key script
- When a block is connected, remove any transactions from the memory pool
which are now double spends as a result of the newly connected
transactions
- Don't relay transactions resurrected during a chain switch since
other peers will also be switching chains and therefore already know
about them
- Cleanup a few cases where rejected transactions showed as an error
rather than as a rejected transaction
- Ignore the default configuration file when --regtest (regression test
mode) is specified
- Implement TLS support for RPC including automatic certificate generation
- Support HTTP authentication headers for web sockets
- Update address manager to recognize and properly work with Tor
addresses (https://github.com/conformal/pod/issues/36) and
(https://github.com/conformal/pod/issues/37)
- Improve podctl utility in the following ways:
- Add the ability to specify a configuration file
- Add a default entry for the RPC cert to point to the location
it will likely be in the pod home directory
- Implement --version flag
- Provide a --notls option to support non-TLS configurations
- Fix a couple of minor races found by the Go race detector
- Improve logging
- Allow logging level to be specified on a per subsystem basis
(https://github.com/conformal/pod/issues/48)
- Allow logging levels to be dynamically changed via RPC
(https://github.com/conformal/pod/issues/15)
- Implement a rolling log file with a max of 10MB per file and a
rotation size of 3 which results in a max logging size of 30 MB
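The rotation policy above bounds total retention at maxFiles x maxSize (3 x 10MB = 30MB). The in-memory sketch below models only the policy, not the on-disk file renaming the real logger performs:

```go
package main

import "fmt"

// roller keeps chunks of at most maxSize bytes, at most maxFiles of them,
// dropping the oldest chunk on rotation.
type roller struct {
	maxSize, maxFiles int
	files             [][]byte // files[len-1] is the "current" log
}

func (r *roller) Write(p []byte) {
	if len(r.files) == 0 {
		r.files = append(r.files, nil)
	}
	cur := len(r.files) - 1
	if len(r.files[cur])+len(p) > r.maxSize {
		r.files = append(r.files, nil) // rotate to a fresh file
		if len(r.files) > r.maxFiles {
			r.files = r.files[1:] // drop the oldest file
		}
		cur = len(r.files) - 1
	}
	r.files[cur] = append(r.files[cur], p...)
}

// total is the number of bytes currently retained across all files.
func (r *roller) total() int {
	n := 0
	for _, f := range r.files {
		n += len(f)
	}
	return n
}

func main() {
	r := &roller{maxSize: 10, maxFiles: 3}
	for i := 0; i < 100; i++ {
		r.Write([]byte("log line\n"))
	}
	fmt.Println(len(r.files), r.total() <= r.maxSize*r.maxFiles)
}
```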
- Correct a minor issue with the rescanning websocket call
(https://github.com/conformal/pod/issues/54)
- Fix a race with pushing address messages that could lead to a panic
(https://github.com/conformal/pod/issues/58)
- Improve which external IP address is reported to peers based on which
interface they are connected through
(https://github.com/conformal/pod/issues/35)
- Add --externalip option to allow an external IP address to be specified
for cases such as tor hidden services or advanced network configurations
(https://github.com/conformal/pod/issues/38)
- Add --upnp option to support automatic port mapping via UPnP
(https://github.com/conformal/pod/issues/51)
- Update Ctrl+C interrupt handler to properly sync address manager and
remove the UPnP port mapping (if needed)
- Continue cleanup and work on implementing RPC API calls
- Add importprivkey (import private key) command to podctl
- Update getrawtransaction to provide addresses properly, support
new verbose param, and match the reference implementation with the
exception of MULTISIG (thanks @flammit)
- Update getblock with new verbose flag (thanks @flammit)
- Add listtransactions command to podctl
- Add getbalance command to podctl
- Add basic support for pod to run as a native Windows service
(https://github.com/conformal/pod/issues/42)
- Package addblock utility with Windows MSIs
- Add support for TravisCI (continuous build integration)
- Cleanup some documentation and usage
- Several other minor bug fixes and general code cleanup
Changes in 0.3.3 (Wed Nov 13 2013)
- Significantly improve initial block chain download speed
(https://github.com/conformal/pod/issues/20)
- Add a new checkpoint at block height 267300
- Optimize most recently used inventory handling
(https://github.com/conformal/pod/issues/21)
- Optimize duplicate transaction input check
(https://github.com/conformal/btcchain/issues/2)
- Optimize transaction hashing
(https://github.com/conformal/pod/issues/25)
- Rework and optimize wallet listener notifications
(https://github.com/conformal/pod/issues/22)
- Optimize serialization and deserialization
(https://github.com/conformal/pod/issues/27)
- Add support for minimum transaction fee to memory pool acceptance
(https://github.com/conformal/pod/issues/29)
- Improve leveldb database performance by removing explicit GC call
- Fix an issue where Ctrl+C was not always finishing orderly database
shutdown
- Fix an issue in the script handling for OP_CHECKSIG
- Impose max limits on all variable length protocol entries to prevent
abuse from malicious peers
- Enforce DER signatures for transactions allowed into the memory pool
- Separate the debug profile http server from the RPC server
- Rework of the RPC code to improve performance and make the code cleaner
- The getrawtransaction RPC call now properly checks the memory pool
before consulting the db (https://github.com/conformal/pod/issues/26)
- Add support for the following RPC calls: getpeerinfo, getconnectedcount,
addnode, verifychain
(https://github.com/conformal/pod/issues/13)
(https://github.com/conformal/pod/issues/17)
- Implement rescan websocket extension to allow wallet rescans
- Use correct paths for application data storage for all supported
operating systems (https://github.com/conformal/pod/issues/30)
- Add a default redirect to the http profiling page when accessing the
http profile server
- Add a new --cpuprofile option which can be used to generate CPU
profiling data on platforms that support it
- Several other minor performance optimizations
- Other minor bug fixes and general code cleanup
Changes in 0.3.2 (Tue Oct 22 2013)
- Fix an issue that could cause the download of the block chain to stall
(https://github.com/conformal/pod/issues/12)
- Remove deprecated sqlite as an available database backend
- Close sqlite compile issue as sqlite has now been removed
(https://github.com/conformal/pod/issues/11)
- Change default RPC ports to 11048 (mainnet) and 21048 (testnet)
- Continue cleanup and work on implementing RPC API calls
- Add support for the following RPC calls: getrawmempool,
getbestblockhash, decoderawtransaction, getdifficulty,
getconnectioncount, getpeerinfo, and addnode
- Improve the podctl utility that is used to issue JSON-RPC commands
- Fix an issue preventing pod from cleanly shutting down with the RPC
stop command
- Add a number of database interface tests to ensure backends implement
the expected interface
- Expose some additional information from btcscript to be used for
identifying "standard" transactions
- Add support for plan9 - thanks @mischief
(https://github.com/conformal/pod/pull/19)
- Other minor bug fixes and general code cleanup
Changes in 0.3.1-alpha (Tue Oct 15 2013)
- Change default database to leveldb
NOTE: This does mean you will have to redownload the block chain. Since we
are still in alpha, we didn't feel writing a converter was worth the time as
it would take away from more important issues at this stage
- Add a warning if there are multiple block chain databases of different types
- Fix issue with unexpected EOF in leveldb -- https://github.com/conformal/pod/issues/18
- Fix issue preventing block 21066 on testnet -- https://github.com/conformal/btcchain/issues/1
- Fix issue preventing block 96464 on testnet -- https://github.com/conformal/btcscript/issues/1
- Optimize transaction lookups
- Correct a few cases of list removal that could result in improper cleanup
of no longer needed orphans
- Add functionality to increase ulimits on non-Windows platforms
- Add support for mempool command which allows remote peers to query the
transaction memory pool via the bitcoin protocol
- Clean up logging a bit
- Add a flag to disable checkpoints for developers
- Add a lot of useful debug logging such as message summaries
- Other minor bug fixes and general code cleanup
Initial Release 0.3.0-alpha (Sat Oct 05 2013):
- Initial release
cmd/node/LICENSE Executable file
ISC License
Copyright (c) 2018- The Parallelcoin Team
Copyright (c) 2013-2017 The btcsuite developers
Copyright (c) 2015-2016 The Decred developers
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
cmd/node/README.md Executable file
![](https://gitlab.com/parallelcoin/node/raw/master/assets/logo.png)
# The Parallelcoin Node [![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org) [![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/p9c/p9/node)
Next generation full node for Parallelcoin, forked
from [btcd](https://github.com/btcsuite/btcd)
## Hard Fork 1: Plan 9 from Crypto Space
**TODO:** update this!
9 algorithms can be used when mining:
- Blake14lr (decred)
- ~~Skein (myriadcoin)~~ Cryptonote7v2
- Lyra2REv2 (sia)
- Keccak (maxcoin, smartcash)
- Scrypt (litecoin)
- SHA256D (bitcoin)
- GOST Stribog \*
- Skein
- X11 (dash)
### Stochastic Binomial Filter Difficulty Adjustment
After the upcoming hardfork, Parallelcoin will have the following features in
its difficulty adjustment regime:
- Exponential curve with a power of 3 that responds gently to natural drift while
moving the difficulty fast when it falls below 10% of target or above 10x target,
to deal with recovering after a large increase in network hashpower
- 293 second blocks (7 seconds less than 5 minutes), 1439 block averaging
window (about 4.8 days) that is varied by interpreting byte 0 of the sha256d
hash of newest block hash as a signed 8 bit integer to further disturb any
inherent rhythm (like dithering).
- Difficulty adjustments are based on a window ending at the previous block of
each algorithm, meaning sharp rises from one algorithm do not immediately
affect the other algorithms, allowing a smoother recovery from a sudden drop
in hashrate, soaking up energetic movements more robustly and resiliently, and
reducing vulnerability to time distortion attacks.
- Deterministic noise is added to the difficulty adjustment in a similar way as
is done with digital audio and images to improve the effective resolution of
the signal by reducing unwanted artifacts caused by the sampling process.
Miners are random generators, and a block time is like a tuning filter, so the
same principles apply.
- Rewards will be computed according to a much smoother, satoshi-precision
exponential decay curve that will produce a flat annual 5% supply expansion.
Increasing the precision of the denomination is planned for the next release
cycle; with 0.00000001 as the minimum denomination, there may be issues as the
userbase increases.
- Fair Hardfork - Rewards will slowly rise from the initial hard fork at an
inverse exponential rate, bringing the block reward from 0.02 up to 2 over 2000
blocks. The adjustment to network capacity takes time, so rewards will closely
match the time interval they relate to while the difficulty, starting from the
minimum target, stabilises in response to what miners create.
## Installation
For the main full node server:
```bash
go get github.com/parallelcointeam/parallelcoin
```
You will probably also want the CLI client (which can also speak to other bitcoin
protocol RPC endpoints):
```bash
go get github.com/p9c/p9/cmd/podctl
```
## Requirements
[Go](http://golang.org) 1.11 or newer.
## Installation
#### Windows not available yet
When it is, it will be available here:
https://github.com/p9c/p9/releases
#### Linux/BSD/MacOSX/POSIX - Build from Source
- Install Go according to
the [installation instructions](http://golang.org/doc/install)
- Ensure Go was installed properly and is a supported version:
```bash
$ go version
$ go env GOROOT GOPATH
```
NOTE: The `GOROOT` and `GOPATH` above must not be the same path. It is
recommended that `GOPATH` is set to a directory in your home directory such
as `~/goprojects` to avoid write permission issues. It is also recommended to
add `$GOPATH/bin` to your `PATH` at this point.
- Run the following commands to obtain pod, all dependencies, and install it:
```bash
$ go get github.com/parallelcointeam/parallelcoin
```
- pod (and utilities) will now be installed in `$GOPATH/bin`. If you did not
already add the bin directory to your system path during Go installation, we
recommend you do so now.
## Updating
#### Windows
Install a newer MSI
#### Linux/BSD/MacOSX/POSIX - Build from Source
- Run the following commands to update pod, all dependencies, and install it:
```bash
$ cd $GOPATH/src/github.com/parallelcointeam/parallelcoin
$ git pull && glide install
$ go install . ./cmd/...
```
## Getting Started
pod has several configuration options available to tweak how it runs, but all of
the basic operations described in the intro section work with zero
configuration.
#### Windows (Installed from MSI)
Launch pod from your Start menu.
#### Linux/BSD/POSIX/Source
```bash
$ ./pod
```
## Discord
Come and chat at our [discord server](https://discord.gg/nJKts94)
## Issue Tracker
The [integrated github issue tracker](https://github.com/p9c/p9/issues)
is used for this project.
## Documentation
The documentation is a work-in-progress. It is located in
the [docs](https://github.com/p9c/p9/tree/master/docs)
folder.
## License
pod is licensed under the [copyfree](http://copyfree.org) ISC License.
cmd/node/active/config.go Normal file
package active
import (
"net"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/connmgr"
)
// Config stores current state of the node
type Config struct {
Lookup connmgr.LookupFunc
Oniondial func(string, string, time.Duration) (net.Conn, error)
Dial func(string, string, time.Duration) (net.Conn, error)
AddedCheckpoints []chaincfg.Checkpoint
ActiveMiningAddrs []btcaddr.Address
ActiveMinerKey []byte
ActiveMinRelayTxFee amt.Amount
ActiveWhitelists []*net.IPNet
DropAddrIndex bool
DropTxIndex bool
DropCfIndex bool
Save bool
}
cmd/node/docs/README.md Executable file
### Table of Contents
1. [About](#About)
2. [Getting Started](#GettingStarted)
1. [Installation](#Installation)
1. [Windows](#WindowsInstallation)
2. [Linux/BSD/MacOSX/POSIX](#PosixInstallation)
3. [Gentoo Linux](#GentooInstallation)
2. [Configuration](#Configuration)
3. [Controlling and Querying pod via podctl](#BtcctlConfig)
4. [Mining](#Mining)
3. [Help](#Help)
1. [Startup](#Startup)
1. [Using bootstrap.dat](#BootstrapDat)
2. [Network Configuration](#NetworkConfig)
3. [Wallet](#Wallet)
4. [Contact](#Contact)
1. [IRC](#ContactIRC)
2. [Mailing Lists](#MailingLists)
5. [Developer Resources](#DeveloperResources)
1. [Code Contribution Guidelines](#ContributionGuidelines)
2. [JSON-RPC Reference](#JSONRPCReference)
3. [The btcsuite Bitcoin-related Go Packages](#GoPackages)
<a name="About" />
### 1. About
pod is a full node bitcoin implementation written in [Go](http://golang.org), licensed under
the [copyfree](http://www.copyfree.org) ISC License.
This project is currently under active development and is in a Beta state. It is extremely stable and has been in
production use since October 2013.
It properly downloads, validates, and serves the block chain using the exact rules (including consensus bugs) for block
acceptance as Bitcoin Core. We have taken great care to avoid pod causing a fork to the block chain. It includes a full
block validation testing framework which contains all of the 'official' block acceptance tests (and some additional
ones) that is run on every pull request to help ensure it properly follows consensus. Also, it passes all of the JSON
test data in the Bitcoin Core code.
It also properly relays newly mined blocks, maintains a transaction pool, and relays individual transactions that have
not yet made it into a block. It ensures all individual transactions admitted to the pool follow the rules required by
the block chain and also includes more strict checks which filter transactions based on miner requirements ("standard"
transactions).
One key difference between pod and Bitcoin Core is that pod does _NOT_ include wallet functionality and this was a very
intentional design decision. See the blog entry [here](https://blog.conformal.com/pod-not-your-moms-bitcoin-daemon) for
more details. This means you can't actually make or receive payments directly with pod. That functionality is provided
by the [btcwallet](https://github.com/p9c/p9/walletmain).
<a name="GettingStarted" />
### 2. Getting Started
<a name="Installation" />
**2.1 Installation**
The first step is to install pod. See one of the following sections for details on how to install on the supported
operating systems.
<a name="WindowsInstallation" />
**2.1.1 Windows Installation**<br />
- Install the MSI available at: https://github.com/p9c/p9/releases
- Launch pod from the Start Menu
<a name="PosixInstallation" />
**2.1.2 Linux/BSD/MacOSX/POSIX Installation**
- Install Go according to the installation instructions here: http://golang.org/doc/install
- Ensure Go was installed properly and is a supported version:
```bash
$ go version
$ go env GOROOT GOPATH
```
NOTE: The `GOROOT` and `GOPATH` above must not be the same path. It is recommended that `GOPATH` is set to a directory
in your home directory such as `~/goprojects` to avoid write permission issues. It is also recommended to
add `$GOPATH/bin` to your `PATH` at this point.
- Run the following commands to obtain pod, all dependencies, and install it:
```bash
$ go get -u github.com/Masterminds/glide
$ git clone https://github.com/parallelcointeam/parallelcoin $GOPATH/src/github.com/parallelcointeam/parallelcoin
$ cd $GOPATH/src/github.com/parallelcointeam/parallelcoin
$ glide install
$ go install . ./cmd/...
```
- pod (and utilities) will now be installed in `$GOPATH/bin`. If you did not already add the bin directory to your
system path during Go installation, we recommend you do so now.
**Updating**
- Run the following commands to update pod, all dependencies, and install it:
```bash
$ cd $GOPATH/src/github.com/parallelcointeam/parallelcoin
$ git pull && glide install
$ go install . ./cmd/...
```
<a name="GentooInstallation" />
**2.1.2.1 Gentoo Linux Installation**
- Install Layman and enable the Bitcoin overlay.
- https://gitlab.com/bitcoin/gentoo
- Copy or symlink `/var/lib/layman/bitcoin/Documentation/package.keywords/pod-live` to `/etc/portage/package.keywords/`
- Install pod: `$ emerge net-p2p/pod`
<a name="Configuration" />
**2.2 Configuration**
pod has a number of [configuration](http://godoc.org/github.com/parallelcointeam/parallelcoin) options, which can be
viewed by running: `$ pod --help`.
<a name="BtcctlConfig" />
**2.3 Controlling and Querying pod via podctl**
podctl is a command line utility that can be used to both control and query pod
via [RPC](http://www.wikipedia.org/wiki/Remote_procedure_call). pod does **not** enable its RPC server by default; you
must configure at minimum both an RPC username and password or both an RPC limited username and password:
- pod.conf configuration file
```
[Application Options]
rpcuser=myuser
rpcpass=SomeDecentp4ssw0rd
rpclimituser=mylimituser
rpclimitpass=Limitedp4ssw0rd
```
- podctl.conf configuration file
```
[Application Options]
rpcuser=myuser
rpcpass=SomeDecentp4ssw0rd
```
OR
```
[Application Options]
rpclimituser=mylimituser
rpclimitpass=Limitedp4ssw0rd
```
For a list of available options, run: `$ podctl --help`
<a name="Mining" />
**2.4 Mining**
pod supports the `getblocktemplate` RPC. The limited user cannot access this RPC.
**1. Add the payment addresses with the `miningaddr` option.**
```
[Application Options]
rpcuser=myuser
rpcpass=SomeDecentp4ssw0rd
miningaddr=12c6DSiU4Rq3P4ZxziKxzrL5LmMBrzjrJX
miningaddr=1M83ju3EChKYyysmM2FXtLNftbacagd8FR
```
**2. Add pod's RPC TLS certificate to system Certificate Authority list.**
`cgminer` uses [curl](http://curl.haxx.se/) to fetch data from the RPC server. Since curl validates the certificate by
default, we must install the `pod` RPC certificate into the default system Certificate Authority list.
**Ubuntu**
1. Copy rpc.cert to /usr/share/ca-certificates: `# cp /home/user/.pod/rpc.cert /usr/share/ca-certificates/pod.crt`
2. Add pod.crt to /etc/ca-certificates.conf: `# echo pod.crt >> /etc/ca-certificates.conf`
3. Update the CA certificate list: `# update-ca-certificates`
**3. Set your mining software url to use https.**
`$ cgminer -o https://127.0.0.1:11048 -u rpcuser -p rpcpassword`
<a name="Help" />
### 3. Help
<a name="Startup" />
**3.1 Startup**
Typically pod will run and start downloading the block chain with no extra configuration necessary; however, there is an
optional method to use a `bootstrap.dat` file that may speed up the initial block chain download process.
<a name="BootstrapDat" />
**3.1.1 bootstrap.dat**
- [Using bootstrap.dat](https://github.com/p9c/p9/tree/master/docs/using_bootstrap_dat.md)
<a name="NetworkConfig" />
**3.1.2 Network Configuration**
- [What Ports Are Used by Default?](https://github.com/p9c/p9/tree/master/docs/default_ports.md)
- [How To Listen on Specific Interfaces](https://github.com/p9c/p9/tree/master/docs/configure_peer_server_listen_interfaces.md)
- [How To Configure RPC Server to Listen on Specific Interfaces](https://github.com/p9c/p9/tree/master/docs/configure_rpc_server_listen_interfaces.md)
- [Configuring pod with Tor](https://github.com/p9c/p9/tree/master/docs/configuring_tor.md)
<a name="Wallet" />
**3.1 Wallet**
pod was intentionally developed without an integrated wallet for security reasons. Please
see [btcwallet](https://github.com/btcsuite/btcwallet) for more information.
<a name="Contact" />
### 4. Contact
<a name="ContactIRC" />
**4.1 IRC**
- [irc.freenode.net](irc://irc.freenode.net), channel `#pod`
<a name="MailingLists" />
**4.2 Mailing Lists**
- <a href="mailto:pod+subscribe@opensource.conformal.com">pod</a>: discussion of pod and its packages.
- <a href="mailto:pod-commits+subscribe@opensource.conformal.com">pod-commits</a>:
readonly mail-out of source code changes.
<a name="DeveloperResources" />
### 5. Developer Resources
<a name="ContributionGuidelines" />
- [Code Contribution Guidelines](https://github.com/p9c/p9/tree/master/docs/code_contribution_guidelines.md)
<a name="JSONRPCReference" />
- [JSON-RPC Reference](https://github.com/p9c/p9/tree/master/docs/json_rpc_api.md)
- [RPC Examples](https://github.com/p9c/p9/tree/master/docs/json_rpc_api.md#ExampleCode)
<a name="GoPackages" />
- The btcsuite Bitcoin-related Go Packages:
- [btcrpcclient](https://github.com/p9c/p9/tree/master/rpcclient) - Implements a robust and easy to use
Websocket-enabled Bitcoin JSON-RPC client
- [btcjson](https://github.com/p9c/p9/tree/master/btcjson) - Provides an extensive API for the underlying JSON-RPC
command and return values
- [wire](https://github.com/p9c/p9/tree/master/wire) - Implements the Bitcoin wire protocol
- [peer](https://github.com/p9c/p9/tree/master/peer) - Provides a common base for creating and managing Bitcoin
network peers.
- [blockchain](https://github.com/p9c/p9/tree/master/blockchain) - Implements Bitcoin block handling and chain
selection rules
- [blockchain/fullblocktests](https://github.com/p9c/p9/tree/master/blockchain/fullblocktests) - Provides a set of
block tests for testing the consensus validation rules
- [txscript](https://github.com/p9c/p9/tree/master/txscript) - Implements the Bitcoin transaction scripting
language
- [btcec](https://github.com/p9c/p9/tree/master/btcec) - Implements support for the elliptic curve cryptographic
functions needed for the Bitcoin scripts
- [database](https://github.com/p9c/p9/tree/master/database) - Provides a database interface for the Bitcoin block
chain
- [mempool](https://github.com/p9c/p9/tree/master/mempool) - Package mempool provides a policy-enforced pool of
unmined bitcoin transactions.
- [util](https://github.com/p9c/p9/util) - Provides Bitcoin-specific convenience functions and types
- [chainhash](https://github.com/p9c/p9/tree/master/chaincfg/chainhash) - Provides a generic hash type and
associated functions that allows the specific hash algorithm to be abstracted.
- [connmgr](https://github.com/p9c/p9/tree/master/connmgr) - Package connmgr implements a generic Bitcoin network
connection manager.
### Table of Contents
1. [Overview](#Overview)<br />
2. [Minimum Recommended Skillset](#MinSkillset)<br />
3. [Required Reading](#ReqReading)<br />
4. [Development Practices](#DevelopmentPractices)<br />
4.1. [Share Early, Share Often](#ShareEarly)<br />
4.2. [Testing](#Testing)<br />
4.3. [Code Documentation and Commenting](#CodeDocumentation)<br />
4.4. [Model Git Commit Messages](#ModelGitCommitMessages)<br />
5. [Code Approval Process](#CodeApproval)<br />
5.1 [Code Review](#CodeReview)<br />
5.2 [Rework Code (if needed)](#CodeRework)<br />
5.3 [Acceptance](#CodeAcceptance)<br />
6. [Contribution Standards](#Standards)<br />
6.1. [Contribution Checklist](#Checklist)<br />
6.2. [Licensing of Contributions](#Licensing)<br />
<a name="Overview"></a>
### 1. Overview
Developing cryptocurrencies is an exciting endeavor that touches a wide variety of areas such as wire protocols,
peer-to-peer networking, databases, cryptography, language interpretation (transaction scripts), RPC, and websockets.
They also represent a radical shift to the current fiscal system and as a result provide an opportunity to help reshape
the entire financial system. There are few projects that offer this level of diversity and impact all in one code base.
However, as exciting as it is, one must keep in mind that cryptocurrencies represent real money and introducing bugs and
security vulnerabilities can have far more dire consequences than in typical projects where having a small bug is
minimal by comparison. In the world of cryptocurrencies, even the smallest bug in the wrong area can cost people a
significant amount of money. For this reason, the pod suite has a formalized and rigorous development process which is
outlined on this page.
We highly encourage code contributions, however it is imperative that you adhere to the guidelines established on this
page.
<a name="MinSkillset"></a>
### 2. Minimum Recommended Skillset
The following list is a set of core competencies that we recommend you possess before you really start attempting to
contribute code to the project. These are not hard requirements as we will gladly accept code contributions as long as
they follow the guidelines set forth on this page. That said, if you don't have the following basic qualifications you
will likely find it quite difficult to contribute.
- A reasonable understanding of bitcoin at a high level (see the [Required Reading](#ReqReading) section for the
original white paper)
- Experience in some type of C-like language. Go is preferable of course.
- An understanding of data structures and their performance implications
- Familiarity with unit testing
- Debugging experience
- Ability to understand not only the area you are making a change in, but also the code your change relies on, and the
code which relies on your changed code
Building on top of those core competencies, the recommended skill set largely depends on the specific areas you are
looking to contribute to. For example, if you wish to contribute to the cryptography code, you should have a good
understanding of the various aspects involved with cryptography such as the security and performance implications.
<a name="ReqReading"></a>
### 3. Required Reading
- [Effective Go](http://golang.org/doc/effective_go.html) - The entire pod suite follows the guidelines in this
document. For your code to be accepted, it must follow the guidelines therein.
- [Original Satoshi Whitepaper](http://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=1&cad=rja&ved=0CCkQFjAA&url=http%3A%2F%2Fbitcoin.org%2Fbitcoin.pdf&ei=os3VUuH8G4SlsASV74GoAg&usg=AFQjCNEipPLigou_1MfB7DQjXCNdlylrBg&sig2=FaHDuT5z36GMWDEnybDJLg&bvm=bv.59378465,d.b2I)
- This is the white paper that started it all. Having a solid foundation to build on will make the code much more
comprehensible.
<a name="DevelopmentPractices"></a>
### 4. Development Practices
Developers are expected to work in their own trees and submit pull requests when they feel their feature or bug fix is
ready for integration into the master branch.
<a name="ShareEarly"></a>
### 4.1 Share Early, Share Often
We firmly believe in the share early, share often approach. The basic premise of the approach is to announce your
plans **before** you start work, and once you have started working, craft your changes into a stream of small and easily
reviewable commits.
This approach has several benefits:
- Announcing your plans to work on a feature **before** you begin work avoids duplicate work
- It permits discussions which can help you achieve your goals in a way that is consistent with the existing
architecture
- It minimizes the chances of you spending time and energy on a change that might not fit with the consensus of the
community or existing architecture and potentially be rejected as a result
- Incremental development helps ensure you are on the right track with regards to the rest of the community
- The quicker your changes are merged to master, the less time you will need to spend rebasing and otherwise trying to
keep up with the main code base
<a name="Testing"></a>
### 4.2 Testing
One of the major design goals of all core pod packages is to aim for complete test coverage. This is financial software
so bugs and regressions can cost people real money. For this reason every effort must be taken to ensure the code is as
accurate and bug-free as possible. Thorough testing is a good way to help achieve that goal.
Unless a new feature you submit is completely trivial, it will probably be rejected unless it is also accompanied by
adequate test coverage for both positive and negative conditions. That is to say, the tests must ensure your code works
correctly when it is fed correct data as well as incorrect data (error paths).
Go provides an excellent test framework that makes writing test code and checking coverage statistics straightforward.
For more information about the test coverage tools, see the [golang cover blog post](http://blog.golang.org/cover).
A quick summary of test practices follows:
- All new code should be accompanied by tests that ensure the code behaves correctly when given expected values, and,
perhaps even more importantly, that it handles errors gracefully
- When you fix a bug, it should be accompanied by tests which exercise the bug to both prove it has been resolved and to
prevent future regressions
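The positive-and-negative-path requirement above can be sketched with a small table-driven example. The function under test and its cases are invented for illustration; in a real package the loop would live in a `func TestParseFee(t *testing.T)` using the standard `testing` package:

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// parseFee is a hypothetical stand-in for a function under test: it rejects
// negative values, giving us an error path to exercise as well as a happy path.
func parseFee(s string) (int64, error) {
	v, e := strconv.ParseInt(s, 10, 64)
	if e != nil || v < 0 {
		return 0, errors.New("invalid fee")
	}
	return v, nil
}

func main() {
	// Table-driven cases cover both correct data and error paths.
	cases := []struct {
		in      string
		want    int64
		wantErr bool
	}{
		{"1000", 1000, false}, // positive path: expected value
		{"-1", 0, true},       // negative path: out-of-range value
		{"abc", 0, true},      // negative path: not a number
	}
	for _, c := range cases {
		got, e := parseFee(c.in)
		if (e != nil) != c.wantErr || got != c.want {
			panic("case failed: " + c.in)
		}
	}
	fmt.Println("all cases pass")
}
```

The table makes it cheap to add a new case whenever a bug is fixed, which is exactly the regression guard the guideline asks for.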
<a name="CodeDocumentation"></a>
### 4.3 Code Documentation and Commenting
Comments have a way of turning into lies during development, so you should not expect readers to depend on them. Much
more important is that names are meaningful, that they do not take up excessive space, and that comments are only added
when the meaning of the code needs clarification.
- At a minimum every function must be commented with its intended purpose and any assumptions that it makes
- Function comments must always begin with the name of the function
per [Effective Go](http://golang.org/doc/effective_go.html)
- Function comments should be complete sentences since they allow a wide variety of automated presentations such
as [godoc.org](https://godoc.org)
  - Comments should be brief; function type signatures should be informative enough that the comment is only for
    clarification. Comments are not tested by the compiler, and can obscure the intent of the code if the code is
    opaque in its semantics.
- Comments will be parsed by godoc and excess vertical space usage reduces the readability of code, so there is no
sane reason why the comments (and indeed, in documents such as this) should be manually split into lines. That's
what word wrap is for.
- The general rule of thumb is to look at it as if you were completely unfamiliar with the code and ask yourself,
would this give me enough information to understand what this function does and how I'd probably want to use it?
- Detailed information in comments should be mainly in the type definitions. Meaningful names in function parameters
are more important than silly long complicated comments and make the code harder to read where the clarity is most
needed.
- If you need to write a lot of comments about code you probably have not written it well.
- Variable and constant names should be informative, and where obvious, brief.
  - If the function signature is longer than 80 characters in total, you should change the parameters to be a
    structured variable; the structure will explain the parameters better and be more visually attractive than a
    function call with more than 5 parameters.
  - If you use a constant value more than a few times, and especially in more than a few source files, you should
    give it a meaningful name and place it in a separate package, which allows you to avoid circular dependencies. It
    is better to define a type independently and create an alias in the implementation, as methods must be defined
    locally to the type. However, if you need to access the fields of the type, the definition needs to be isolated
    from the implementation, otherwise you will almost certainly run into a circular dependency that blocks
    compilation.
- The best place for detailed information is a separate `doc.go` file, where the comment that appears before the
package name appears at the very top in the Godoc output, and in the structure and type definitions for exported
types. Functions should not be the place to put this, as it interferes with readability, and scatters the
information, at the same time.
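The guideline above about long signatures can be sketched with a hypothetical example; the type, field, and function names here are invented for illustration and do not appear in the pod codebase:

```go
package main

import "fmt"

// DialParams gathers what would otherwise be a long parameter list into a
// structured variable, as the guideline above suggests. All names here are
// invented for the example.
type DialParams struct {
	Network string
	Address string
	UseTLS  bool
	Retries int
}

// Dial reads far better at the call site than a five-parameter signature,
// because the field names document each argument.
func Dial(p DialParams) string {
	scheme := "tcp"
	if p.UseTLS {
		scheme = "tls"
	}
	return fmt.Sprintf("%s://%s (%s, retries=%d)", scheme, p.Address, p.Network, p.Retries)
}

func main() {
	fmt.Println(Dial(DialParams{
		Network: "mainnet",
		Address: "127.0.0.1:11048",
		UseTLS:  true,
		Retries: 3,
	}))
}
```

A further benefit is that new options can be added to the struct later without breaking every existing call site.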
<a name="ModelGitCommitMessages"></a>
### 4.4 Model Git Commit Messages
This project prefers to keep a clean commit history with well-formed commit messages. This section illustrates a model
commit message and provides a bit of background for it. This content was originally created by Tim Pope and made
available on his website, however that website is no longer active, so it is being provided here.
Here's a model Git commit message:
```
Short (50 chars or less) summary of changes
More detailed explanatory text, if necessary. Wrap it to about 72
characters or so. In some contexts, the first line is treated as the
subject of an email and the rest of the text as the body. The blank
line separating the summary from the body is critical (unless you omit
the body entirely); tools like rebase can get confused if you run the
two together.
Write your commit message in the present tense: "Fix bug" and not "Fixed
bug." This convention matches up with commit messages generated by
commands like git merge and git revert.
Further paragraphs come after blank lines.
- Bullet points are okay, too
- Typically a hyphen or asterisk is used for the bullet, preceded by a
single space, with blank lines in between, but conventions vary here
- Use a hanging indent
```
Prefix the summary with the subsystem/package when possible. Many other projects make use of the code and this makes it
easier for them to tell when something they're using has changed. Have a look
at [past commits](https://github.com/p9c/p9/commits/master) for examples of commit messages.
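A subsystem-prefixed commit can be made as in the following throwaway-repository sketch (the subsystem name and message are invented for illustration):

```shell
# Illustrative only: make a commit whose 50-char summary is prefixed with
# the subsystem/package it touches, in a temporary repository.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email you@example.com
git config user.name You
echo 'package peer' > peer.go
git add peer.go
git commit -q -m "peer: fix deadlock in connection handler"
git log --oneline -1
```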
<a name="CodeApproval"></a>
### 5. Code Approval Process
This section describes the code approval process that is used for code contributions. This is how to get your changes
into pod.
<a name="CodeReview"></a>
### 5.1 Code Review
All code which is submitted will need to be reviewed before inclusion into the master branch. This process is performed
by the project maintainers and usually other committers who are interested in the area you are working in as well.
##### Code Review Timeframe
The timeframe for a code review will vary greatly depending on factors such as the number of other pull requests which
need to be reviewed, the size and complexity of the contribution, how well you followed the guidelines presented on this
page, and how easy it is for the reviewers to digest your commits. For example, if you make one monolithic commit that
makes sweeping changes to things in multiple subsystems, it will obviously take much longer to review. You will also
likely be asked to split the commit into several smaller, and hence more manageable, commits.
Keeping the above in mind, most small changes will be reviewed within a few days, while large or far reaching changes
may take weeks. This is a good reason to stick with the [Share Early, Share Often](#ShareOften) development practice
outlined above.
##### What is the review looking for?
The review is mainly ensuring the code follows the [Development Practices](#DevelopmentPractices)
and [Code Contribution Standards](#Standards). However, there are a few other checks which are generally performed as
follows:
- The code is stable and has no stability or security concerns
- The code is properly using existing APIs and generally fits well into the overall architecture
- The change is not something which is deemed inappropriate by community consensus
<a name="CodeRework"></a>
### 5.2 Rework Code (if needed)
After the code review, the change will be accepted immediately if no issues are found. If there are any concerns or
questions, you will be provided with feedback along with the next steps needed to get your contribution merged with
master. In certain cases the code reviewer(s) or interested committers may help you rework the code, but generally you
will simply be given feedback for you to make the necessary changes.
This process will continue until the code is finally accepted.
<a name="CodeAcceptance"></a>
### 5.3 Acceptance
Once your code is accepted, it will be integrated with the master branch. Typically it will be rebased and fast-forward
merged to master, as we prefer to keep a clean commit history over a tangled weave of merge commits. However, regardless
of the specific merge method used, the code will be integrated with the master branch and the pull request will be
closed.
Rejoice as you will now be listed as a [contributor](https://github.com/p9c/p9/graphs/contributors)!
<a name="Standards"></a>
### 6. Contribution Standards
<a name="Checklist"></a>
### 6.1. Contribution Checklist
- [&nbsp;&nbsp;] All changes are Go version 1.11 compliant
- [&nbsp;&nbsp;] The code being submitted is commented according to
the [Code Documentation and Commenting](#CodeDocumentation) section
- [&nbsp;&nbsp;] For new code: Code is accompanied by tests which exercise both the positive and negative (error paths)
conditions (if applicable)
- [&nbsp;&nbsp;] For bug fixes: Code is accompanied by new tests which trigger the bug being fixed to prevent
regressions
- [&nbsp;&nbsp;] Any new logging statements use an appropriate subsystem and logging level
- [&nbsp;&nbsp;] Code has been formatted with `go fmt`
- [&nbsp;&nbsp;] Running `go test` does not fail any tests
- [&nbsp;&nbsp;] Running `go vet` does not report any issues
- [&nbsp;&nbsp;] Running [golint](https://github.com/golang/lint) does not report any **new** issues that did not
already exist
<a name="Licensing"></a>
### 6.2. Licensing of Contributions
All contributions must be licensed with the [ISC license](https://github.com/p9c/p9/blob/master/LICENSE). This is the
same license as all of the code in the pod suite.

pod allows you to bind to specific interfaces, which enables you to set up configurations with varying levels of
complexity. The listen parameter can be specified on the command line as shown below with the -- prefix, or in the
configuration file without the -- prefix (as can all long command line options).
The configuration file takes one entry per line.
**NOTE:** The listen flag can be specified multiple times to listen on multiple interfaces as a couple of the examples
below illustrate.
Command Line Examples:
| Flags | Comment |
| -------------------------------------------- | -------------------------------------------------------------------------------------------- |
| --listen=                                   | all interfaces on default port which is changed by `--testnet` and `--regtest` (**default**) |
| --listen=0.0.0.0 | all IPv4 interfaces on default port which is changed by `--testnet` and `--regtest` |
| --listen=:: | all IPv6 interfaces on default port which is changed by `--testnet` and `--regtest` |
| --listen=:11047 | all interfaces on port 11047 |
| --listen=0.0.0.0:11047 | all IPv4 interfaces on port 11047 |
| --listen=[::]:11047 | all IPv6 interfaces on port 11047 |
| --listen=127.0.0.1:11047 | only IPv4 localhost on port 11047 |
| --listen=[::1]:11047 | only IPv6 localhost on port 11047 |
| --listen=:8336 | all interfaces on non-standard port 8336 |
| --listen=0.0.0.0:8336 | all IPv4 interfaces on non-standard port 8336 |
| --listen=[::]:8336 | all IPv6 interfaces on non-standard port 8336 |
| --listen=127.0.0.1:8337 --listen=[::1]:11047 | IPv4 localhost on port 8337 and IPv6 localhost on port 11047 |
| --listen=:11047 --listen=:8337 | all interfaces on ports 11047 and 8337 |
The following config file would configure pod to only listen on localhost for both IPv4 and IPv6:
```text
[Application Options]
listen=127.0.0.1:11047
listen=[::1]:11047
```

pod allows you to bind the RPC server to specific interfaces, which enables you to set up configurations with varying
levels of complexity. The `rpclisten` parameter can be specified on the command line as shown below with the -- prefix,
or in the configuration file without the -- prefix (as can all long command line options).
The configuration file takes one entry per line.
A few things to note regarding the RPC server:
- The RPC server will **not** be enabled unless the `rpcuser` and `rpcpass` options are specified.
- When the `rpcuser` and `rpcpass` and/or `rpclimituser` and `rpclimitpass` options are specified, the RPC server will
only listen on localhost IPv4 and IPv6 interfaces by default. You will need to override the RPC listen interfaces to
include external interfaces if you want to connect from a remote machine.
- The RPC server has TLS disabled by default. You may use the `--TLS` option to enable it.
- The `--rpclisten` flag can be specified multiple times to listen on multiple interfaces as a couple of the examples
below illustrate.
- The RPC server is disabled by default when using the `--regtest` and
`--simnet` networks. You can override this by specifying listen interfaces.
Command Line Examples:
| Flags | Comment |
| ----------------------------------------------- | ------------------------------------------------------------------- |
| --rpclisten= | all interfaces on default port which is changed by `--testnet` |
| --rpclisten=0.0.0.0 | all IPv4 interfaces on default port which is changed by `--testnet` |
| --rpclisten=:: | all IPv6 interfaces on default port which is changed by `--testnet` |
| --rpclisten=:11048 | all interfaces on port 11048 |
| --rpclisten=0.0.0.0:11048 | all IPv4 interfaces on port 11048 |
| --rpclisten=[::]:11048 | all IPv6 interfaces on port 11048 |
| --rpclisten=127.0.0.1:11048 | only IPv4 localhost on port 11048 |
| --rpclisten=[::1]:11048 | only IPv6 localhost on port 11048 |
| --rpclisten=:8336 | all interfaces on non-standard port 8336 |
| --rpclisten=0.0.0.0:8336 | all IPv4 interfaces on non-standard port 8336 |
| --rpclisten=[::]:8336 | all IPv6 interfaces on non-standard port 8336 |
| --rpclisten=127.0.0.1:8337 --rpclisten=[::1]:11048 | IPv4 localhost on port 8337 and IPv6 localhost on port 11048 |
| --rpclisten=:11048 --rpclisten=:8337            | all interfaces on ports 11048 and 8337                              |
The following config file would configure the pod RPC server to listen to all interfaces on the default port, including
external interfaces, for both IPv4 and IPv6:
```text
[Application Options]
rpclisten=
```
As well as the standard port 11048, which delivers sha256d block templates, there are another eight RPC ports, opening
sequentially up to 11056, each providing a specific block template version number.
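The sequential port layout can be sketched as below; note that which block template version number is served on which offset is an assumption here, not taken from the source:

```go
package main

import "fmt"

// Illustrative sketch only: the base RPC port serves sha256d templates and
// eight further ports open sequentially up to 11056. The version assigned
// to each offset is a hypothetical numbering for illustration.
func main() {
	const basePort = 11048
	for i := 0; i <= 8; i++ {
		fmt.Printf("port %d -> block template version %d\n", basePort+i, i)
	}
}
```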

cmd/node/docs/configuring_tor.md
### Table of Contents
1. [Overview](#Overview)<br />
2. [Client-Only](#Client)<br />
2.1 [Description](#ClientDescription)<br />
2.2 [Command Line Example](#ClientCLIExample)<br />
2.3 [Config File Example](#ClientConfigFileExample)<br />
3. [Client-Server via Tor Hidden Service](#HiddenService)<br />
3.1 [Description](#HiddenServiceDescription)<br />
3.2 [Command Line Example](#HiddenServiceCLIExample)<br />
3.3 [Config File Example](#HiddenServiceConfigFileExample)<br />
4. [Bridge Mode (Not Anonymous)](#Bridge)<br />
4.1 [Description](#BridgeDescription)<br />
4.2 [Command Line Example](#BridgeCLIExample)<br />
4.3 [Config File Example](#BridgeConfigFileExample)<br />
5. [Tor Stream Isolation](#TorStreamIsolation)<br />
5.1 [Description](#TorStreamIsolationDescription)<br />
5.2 [Command Line Example](#TorStreamIsolationCLIExample)<br />
5.3 [Config File Example](#TorStreamIsolationFileExample)<br />
<a name="Overview"></a>
### 1. Overview
pod provides full support for anonymous networking via the [Tor Project](https://www.torproject.org/),
including [client-only](#Client) and [hidden service](#HiddenService) configurations along
with [stream isolation](#TorStreamIsolation). In addition, pod supports a hybrid, [bridge mode](#Bridge) which is not
anonymous, but allows it to operate as a bridge between regular nodes and hidden service nodes without routing the
regular connections through Tor.
While it is easier to only run as a client, it is more beneficial to the network to run as both a client and a
server, so others may connect to you as you are connecting to them. We recommend you take the time to set up a Tor
hidden service for this reason.
<a name="Client"></a>
### 2. Client-Only
<a name="ClientDescription"></a>
**2.1 Description**<br />
Configuring pod as a Tor client is straightforward. The first step is obviously to install Tor and ensure it is working.
Once that is done, all that typically needs to be done is to specify the `--proxy` flag via the pod command line or in
the pod configuration file. Typically the Tor proxy address will be 127.0.0.1:9050 (if using standalone Tor) or
127.0.0.1:9150 (if using the Tor Browser Bundle). If you have Tor configured to require a username and password, you may
specify them with the `--proxyuser` and `--proxypass` flags.
By default, pod assumes the proxy specified with `--proxy` is a Tor proxy and hence will send all traffic, including DNS
resolution requests, via the specified proxy.
NOTE: Specifying the `--proxy` flag disables listening by default since you will not be reachable for inbound
connections unless you also configure a Tor [hidden service](#HiddenService).
<a name="ClientCLIExample"></a>
**2.2 Command Line Example**<br />
```bash
$ ./pod --proxy=127.0.0.1:9050
```
<a name="ClientConfigFileExample"></a>
**2.3 Config File Example**<br />
```text
[Application Options]
proxy=127.0.0.1:9050
```
<a name="HiddenService"></a>
### 3. Client-Server via Tor Hidden Service
<a name="HiddenServiceDescription"></a>
**3.1 Description**<br />
The first step is to configure Tor to provide a hidden service. Documentation for this can be found on the Tor project
website [here](https://www.torproject.org/docs/tor-hidden-service.html.en). However, there is no need to install a web
server locally as the linked instructions discuss since pod will act as the server.
In short, the instructions linked above entail modifying your `torrc` file to add something similar to the following,
restarting Tor, and opening the `hostname` file in the `HiddenServiceDir` to obtain your hidden service .onion address.
```text
HiddenServiceDir /var/tor/pod
HiddenServicePort 11047 127.0.0.1:11047
```
Once Tor is configured to provide the hidden service and you have obtained your generated .onion address, configuring
pod as a Tor hidden service requires three flags:
- `--proxy` to identify the Tor (SOCKS 5) proxy to use for outgoing traffic. This is typically 127.0.0.1:9050.
- `--listen` to enable listening for inbound connections since `--proxy` disables listening by default
- `--externalip` to set the .onion address that is advertised to other peers
<a name="HiddenServiceCLIExample"></a>
**3.2 Command Line Example**<br />
```bash
$ ./pod --proxy=127.0.0.1:9050 --listen=127.0.0.1 --externalip=fooanon.onion
```
<a name="HiddenServiceConfigFileExample"></a>
**3.3 Config File Example**<br />
```text
[Application Options]
proxy=127.0.0.1:9050
listen=127.0.0.1
externalip=fooanon.onion
```
<a name="Bridge"></a>
### 4. Bridge Mode (Not Anonymous)
<a name="BridgeDescription"></a>
**4.1 Description**<br />
pod provides support for operating as a bridge between regular nodes and hidden service nodes. In particular this means
only traffic which is directed to or from a .onion address is sent through Tor while other traffic is sent normally. _As
a result, this mode is **NOT** anonymous._ This mode works by specifying an onion-specific proxy, which is pointed at
Tor, by using the `--onion` flag via the pod command line or in the pod configuration file. If you have Tor configured
to require a username and password, you may specify them with the `--onionuser` and `--onionpass` flags.
NOTE: This mode will also work in conjunction with a hidden service which means you could accept inbound connections
both via the normal network and to your hidden service through the Tor network. To enable your hidden service in bridge
mode, you only need to specify your hidden service's .onion address via the `--externalip` flag since traffic to and
from .onion addresses are already routed via Tor due to the `--onion` flag.
<a name="BridgeCLIExample"></a>
**4.2 Command Line Example**<br />
```bash
$ ./pod --onion=127.0.0.1:9050 --externalip=fooanon.onion
```
<a name="BridgeConfigFileExample"></a>
**4.3 Config File Example**<br />
```text
[Application Options]
onion=127.0.0.1:9050
externalip=fooanon.onion
```
<a name="TorStreamIsolation"></a>
### 5. Tor Stream Isolation
<a name="TorStreamIsolationDescription"></a>
**5.1 Description**<br />
Tor stream isolation forces Tor to build a new circuit for each connection making it harder to correlate connections.
pod provides support for Tor stream isolation by using the `--torisolation` flag. This option requires --proxy or
--onionproxy to be set.
<a name="TorStreamIsolationCLIExample"></a>
**5.2 Command Line Example**<br />
```bash
$ ./pod --proxy=127.0.0.1:9050 --torisolation
```
<a name="TorStreamIsolationFileExample"></a>
**5.3 Config File Example**<br />
```text
[Application Options]
proxy=127.0.0.1:9050
torisolation=1
```

cmd/node/docs/default_ports.md
While pod is highly configurable when it comes to the network configuration, the following is intended to be a quick
reference for the default ports used so port forwarding can be configured as required.
pod provides a `--upnp` flag which can be used to automatically map the bitcoin peer-to-peer listening port if your
router supports UPnP. If your router does not support UPnP, or you don't wish to use it, please note that only the
bitcoin peer-to-peer port should be forwarded unless you specifically want to allow RPC access to your pod from external
sources such as in more advanced network configurations.
| Name | Port |
| --------------------------------- | --------- |
| Default Bitcoin peer-to-peer port | TCP 11047 |
| Default RPC port | TCP 11048 |

cmd/node/docs/json_rpc_api.md
# Parabolic Filter Difficulty Adjustment
The difficulty adjustment algorithm implemented on the testnet, which can be found [here](../blockchain/difficulty.go)
uses a simple cubic binomial formula that generates a variance against a straight linear multiplication of the average
divergence from target that has a relatively wide, flat area that acts as a trap due to the minimal adjustment it
computes within the centre 1/3, approximately, though as can be seen it is a little sharper above 1 than below:
![](parabolic-diff-adjustment-filter.png)
The formula is as follows:
![](parabolic-diff-adjustment-filter-formula.png)
The data used for adjustment is the timestamps of the most recent blocks of an algorithm over a fixed averaging window,
which will initially be set at approximately 5 days (1440 blocks). Weighting, removing outliers, and other techniques
were considered, but because this is essentially both a fuzzy logic control system and a Poisson point
process, it was decided that for the first hard fork the new mechanism, as proven in initial testing, is already far
superior to the previous mechanism, which captures only the most recent 10 blocks of a given algorithm, independently.
The performance of the original continuous difficulty adjustment system is very problematic. It settles into a very wide
variance, with clusters of short blocks followed by very long gaps. It is further destabilised by wide variance of
network hashpower, and the red line in the chart above roughly illustrates the variance/response that currently exists,
though it fails even to work as well as a simple variance of the target directly by multiplying the divergence ratio by
the previous difficulty, even with only one algorithm running, on a loopback testnet.
## The problems of existing difficulty adjustment regimes
Different cryptocurrencies have implemented a wide variety of difficulty adjustment schemes. Generally, the older coins
with a bigger miner userbase have simpler schemes that do not even adjust continuously; bitcoin simply makes an
adjustment every 2016 blocks (2 weeks). This kind of strategy is adequate in the case of a relatively stable hashrate on
the network, but increasingly miners use automated systems to mine whichever coins are the most profitable and/or
easiest (by difficulty), which often leads to even more volatile hashrates and consequent stress testing of the
difficulty adjustments.
### Aliasing, the number one problem
The biggest problem with difficulty adjustment systems stems from the low precision of the block timestamps and the low
effective difficulty targeting precision; due to the square-edged stair-step nature of such coarse resolution, the
system is very vulnerable to falling into resonance feedback, as the sharp harmonics implicit in a coarse sampling
system can rapidly get caught in wildly incorrect adjustments.
The fundamental fact is that what we call 'square' in mathematics is in fact literally infinite exponents. You can see
this very easily by simply plotting y=x<sup>n</sup> curves with large values of `n`. In an adjustment, the calculation
can jump dramatically between one point and the next in the grid, and then it can trigger resonances built from common
factors in the coarse granularity, sending difficulty either up or down far beyond the change in actual hashrate.
The problem grows worse and worse as you try to reduce the block time, and then further compounding the dysregulation,
random network latency, partitioning and partial partitioning such as near-partitioning can cause parts of the network
to desynchronise dramatically, and the resonant cascades are fed with further fluctuating changes in latency, leading to
high levels of orphan blocks, inaccurate adjustments, and dwells on outlying edges of the normal of the target.
To counter this, the difficulty adjustment flips the last two bits of the compressed 'bits' value. This is not an excessive
variance, and basically tends to 'wiggle the tail' of the adjustments so that they are far less likely to fall into
dwell points and go nonlinear. Because it is deterministic, all nodes can easily agree on the correct value based on a
block timestamp and previous block difficulty target, but the result, like the source data, is stochastic, which helps
eliminate the effects of aliasing distortion.
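A minimal sketch of the bit-flipping idea follows; this is not the project's exact consensus code (which lives in `blockchain/difficulty.go`), only an illustration of how a deterministic flip of the two lowest bits dithers the compact target:

```go
package main

import "fmt"

// ditherBits flips the lowest two bits of the compact "bits" difficulty
// representation. The operation is deterministic, so every node derives the
// same value from the same inputs, but the low-order noise breaks up the
// resonances described above.
func ditherBits(bits uint32) uint32 {
	return bits ^ 0x3 // XOR flips exactly the last two bits
}

func main() {
	const bits = 0x1d00ffff // bitcoin's genesis compact target, for scale
	fmt.Printf("%08x\n", ditherBits(bits))
}
```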
### Parabolic response curve
Most proof of work difficulty adjustment regimes use linear functions to find the target created in a block for the next
block to hit. This geometry implicitly creates increasing instability because of the aliasing, as an angular edge, as
mentioned before, has very high harmonics (power/parabolic function with a high index). So in this algorithm we use the
nice, smooth cubic binomial as shown above, which basically adjusts the difficulty less the closer it is to target.
The area has to be reasonably wide, but not too wide. The constant linear adjustment caused by a linear averaging filter
will cause this target to frequently be missed, most often resulting in a periodic variance that never gets
satisfactorily close to keeping the block time stable.
But the worst part of a linear adjustment is that its intrinsic harmonics, created by granularity and aliasing, provide
an attack surface for hashrate based attacks.
So the parabolic filter adjustment will instead converge slowly, and because of the dithering of the lowest bits of the
difficulty target, it will usually avoid dwell points and compounding of common factors. The dithering also means that
as the divergence starts to grow, if hashrate has significantly changed, its stochastic nature makes it easier for the
adjustment to respond more rapidly.
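The shape of this response can be sketched with a hypothetical cubic curve. The real formula is the one pictured above; the constant-free form below only illustrates the flat centre around the target and the steep tails:

```go
package main

import (
	"fmt"
	"math"
)

// cubicAdjust returns a difficulty multiplier from the ratio of actual to
// target block time. Near the target (ratio == 1) the cubic term is close
// to zero, so timing noise is mostly ignored; far from it the correction
// grows with the cube of the divergence. Hypothetical sketch only.
func cubicAdjust(actual, target float64) float64 {
	ratio := actual / target
	return 1 + math.Pow(ratio-1, 3)
}

func main() {
	const target = 300.0 // seconds, an assumed block interval
	for _, r := range []float64{0.5, 0.9, 1.0, 1.1, 2.0} {
		fmt.Printf("ratio %.1f -> multiplier %.3f\n", r, cubicAdjust(r*target, target))
	}
}
```

Note how a 10% divergence moves the multiplier by only 0.001, while a 2x divergence doubles it.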
## Conclusion
By adding a small amount of extra noise to the computations, we diminish the effect of common factors leading to dwells
and sharp variation, and when wide variance occurs, the curve increases the attack of the adjustment in proportion,
smoothly, with the width of the divergence.
The issue of difficulty adjustment becomes a bigger and bigger problem for Proof of Work blockchains the greater the
amount of available hashpower becomes. Chains that are on the margins of hashrate distribution between chains are more
and more vulnerable to 51% attacks and more sophisticated timing based attacks that usually aim to freeze the clock in a
high work side chain that passes the chain tip.
These issues are a huge problem, and the only solution, short of abandoning the anti-spam proof of work system
altogether, is to improve the defences against the various attacks. Eliminating resonances is a big part of this, as
they are often the easiest way to lower the real hashpower required to launch a 51% attack.
A further technique, which will become more relevant with higher transaction volume, is to diminish the incentive for
very large miners to gobble up all the available transactions far outside of the normal, leaving the rest of the miners,
many of whom are loyal to a coin, out of pocket.
The withholding attack on pools is another issue, and part of the solution to this lies in ensuring that transactions
are more spread out in their placement in blocks; the way this will be done is inspired by the ideas in Freshcoin,
which raises the difficulty target along with the block weight.
Another approach that will be explored is low reward minimum difficulty blocks. These are tricky to implement in a live
network with a lot of miners because of the problem of network synchronisation. The solution would seem to lie in
setting a boundary but making it fuzzy enough that these blocks do not cause large numbers of orphans. The other problem
with this is to do with block timestamp based attacks. So for this reason, such changes will be put off for future
exploration.

cmd/node/integration/README.md
# integration
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
This contains integration tests which make use of
the [rpctest](https://github.com/p9c/p9/tree/master/integration/rpctest)
package to programmatically drive nodes via RPC.
## License
This code is licensed under the [copyfree](http://copyfree.org) ISC License.

package integration
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// T.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.Chk(errors.New("F.Chk"))
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

package integration
// This file only exists to prevent warnings due to no buildable source files when the build tag for enabling the tests
// is not specified.

package integration
import (
"bytes"
"fmt"
"os"
"runtime/debug"
"testing"
"github.com/p9c/p9/cmd/node/integration/rpctest"
"github.com/p9c/p9/pkg/chaincfg"
)
func testGetBestBlock(r *rpctest.Harness, t *testing.T) {
_, prevbestHeight, e := r.Node.GetBestBlock()
if e != nil {
t.Fatalf("Call to `getbestblock` failed: %v", e)
}
// Create a new block connecting to the current tip.
generatedBlockHashes, e := r.Node.Generate(1)
if e != nil {
t.Fatalf("Unable to generate block: %v", e)
}
bestHash, bestHeight, e := r.Node.GetBestBlock()
if e != nil {
t.Fatalf("Call to `getbestblock` failed: %v", e)
}
// Hash should be the same as the newly submitted block.
if !bytes.Equal(bestHash[:], generatedBlockHashes[0][:]) {
t.Fatalf(
"Block hashes do not match. Returned hash %v, wanted "+
"hash %v", bestHash, generatedBlockHashes[0][:],
)
}
// Block height should now reflect newest height.
if bestHeight != prevbestHeight+1 {
t.Fatalf(
"Block heights do not match. Got %v, wanted %v",
bestHeight, prevbestHeight+1,
)
}
}
func testGetBlockCount(r *rpctest.Harness, t *testing.T) {
// Save the current count.
currentCount, e := r.Node.GetBlockCount()
if e != nil {
t.Fatalf("Unable to get block count: %v", e)
}
if _, e = r.Node.Generate(1); E.Chk(e) {
t.Fatalf("Unable to generate block: %v", e)
}
// Count should have increased by one.
newCount, e := r.Node.GetBlockCount()
if e != nil {
t.Fatalf("Unable to get block count: %v", e)
}
if newCount != currentCount+1 {
t.Fatalf(
"Block count incorrect. Got %v should be %v",
newCount, currentCount+1,
)
}
}
func testGetBlockHash(r *rpctest.Harness, t *testing.T) {
// Create a new block connecting to the current tip.
generatedBlockHashes, e := r.Node.Generate(1)
if e != nil {
t.Fatalf("Unable to generate block: %v", e)
}
info, e := r.Node.GetInfo()
if e != nil {
t.Fatalf("call to getinfo failed: %v", e)
}
blockHash, e := r.Node.GetBlockHash(int64(info.Blocks))
if e != nil {
t.Fatalf("Call to `getblockhash` failed: %v", e)
}
// Block hashes should match newly created block.
if !bytes.Equal(generatedBlockHashes[0][:], blockHash[:]) {
t.Fatalf(
"Block hashes do not match. Returned hash %v, wanted "+
"hash %v", blockHash, generatedBlockHashes[0][:],
)
}
}
var rpcTestCases = []rpctest.HarnessTestCase{
testGetBestBlock,
testGetBlockCount,
testGetBlockHash,
}
var primaryHarness *rpctest.Harness
func TestMain(m *testing.M) {
var e error
// In order to properly test scenarios as if we were on mainnet, ensure that non-standard transactions aren't
// accepted into the mempool or relayed.
podCfg := []string{"--rejectnonstd"}
primaryHarness, e = rpctest.New(&chaincfg.SimNetParams, nil, podCfg)
if e != nil {
fmt.Println("unable to create primary harness: ", e)
os.Exit(1)
}
// Initialize the primary mining node with a chain of length 125, providing 25 mature coinbases to allow spending
// from for testing purposes.
if e := primaryHarness.SetUp(true, 25); E.Chk(e) {
fmt.Println("unable to setup test chain: ", e)
// Even though the harness was not fully setup, it still needs to be torn down to ensure all resources such as
// temp directories are cleaned up. The error is intentionally ignored since this is already an error path and
// nothing else could be done about it anyways.
_ = primaryHarness.TearDown()
os.Exit(1)
}
exitCode := m.Run()
// Clean up any active harnesses that are still currently running. This includes removing all temporary directories,
// and shutting down any created processes.
if e := rpctest.TearDownAll(); E.Chk(e) {
fmt.Println("unable to tear down all harnesses: ", e)
os.Exit(1)
}
os.Exit(exitCode)
}
func TestRpcServer(t *testing.T) {
var currentTestNum int
defer func() {
// If one of the integration tests caused a panic within the main goroutine, then tear down all the harnesses in
// order to avoid any leaked pod processes.
if r := recover(); r != nil {
fmt.Println("recovering from test panic: ", r)
if e := rpctest.TearDownAll(); E.Chk(e) {
fmt.Println("unable to tear down all harnesses: ", e)
}
t.Fatalf("test #%v panicked: %s", currentTestNum, debug.Stack())
}
}()
for _, testCase := range rpcTestCases {
testCase(primaryHarness, t)
currentTestNum++
}
}

# rpctest
[![ISC License](http://img.shields.io/badge/license-ISC-blue.svg)](http://copyfree.org)
[![GoDoc](https://img.shields.io/badge/godoc-reference-blue.svg)](http://godoc.org/github.com/p9c/p9/integration/rpctest)
Package rpctest provides a pod-specific RPC testing harness for crafting and
executing integration tests by driving a `pod`
instance via the `RPC` interface. Each instance of an active harness comes
equipped with a simple in-memory HD wallet capable of properly syncing to the
generated chain, creating new addresses, and crafting fully signed transactions
paying to an arbitrary set of outputs.
This package was designed specifically to act as an RPC testing harness
for `pod`. However, the constructs presented are general enough to be adapted to
any project wishing to programmatically drive a `pod` instance as part of its
systems/integration tests.
## Installation and Updating
```bash
$ go get -u github.com/p9c/p9/integration/rpctest
```
## License
Package rpctest is licensed under the [copyfree](http://copyfree.org) ISC
License.

package rpctest
import (
"errors"
"math"
"math/big"
"runtime"
"time"
"github.com/p9c/p9/pkg/block"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/blockchain"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/txscript"
"github.com/p9c/p9/pkg/util"
"github.com/p9c/p9/pkg/wire"
)
// solveBlock attempts to find a nonce which makes the passed block header hash to a value less than the target
// difficulty. When a successful solution is found true is returned and the nonce field of the passed header is updated
// with the solution. False is returned if no solution exists.
func solveBlock(header *wire.BlockHeader, targetDifficulty *big.Int) bool {
// sbResult is used by the solver goroutines to send results.
type sbResult struct {
found bool
nonce uint32
}
// solver accepts a block header and a nonce range to test. It is intended to be run as a goroutine.
quit := make(chan bool)
results := make(chan sbResult)
solver := func(hdr wire.BlockHeader, startNonce, stopNonce uint32) {
// We need to modify the nonce field of the header, so make sure we work with a copy of the original header.
for i := startNonce; i >= startNonce && i <= stopNonce; i++ {
select {
case <-quit:
return
default:
hdr.Nonce = i
hash := hdr.BlockHash()
if blockchain.HashToBig(&hash).Cmp(targetDifficulty) <= 0 {
select {
case results <- sbResult{true, i}:
return
case <-quit:
return
}
}
}
}
select {
case results <- sbResult{false, 0}:
case <-quit:
return
}
}
startNonce := uint32(0)
stopNonce := uint32(math.MaxUint32)
numCores := uint32(runtime.NumCPU())
noncesPerCore := (stopNonce - startNonce) / numCores
for i := uint32(0); i < numCores; i++ {
rangeStart := startNonce + (noncesPerCore * i)
rangeStop := startNonce + (noncesPerCore * (i + 1)) - 1
if i == numCores-1 {
rangeStop = stopNonce
}
go solver(*header, rangeStart, rangeStop)
}
for i := uint32(0); i < numCores; i++ {
result := <-results
if result.found {
close(quit)
header.Nonce = result.nonce
return true
}
}
return false
}
// standardCoinbaseScript returns a standard script suitable for use as the signature script of the coinbase transaction
// of a new block. In particular, it starts with the block height that is required by version 2 blocks.
func standardCoinbaseScript(nextBlockHeight int32, extraNonce uint64) ([]byte, error) {
return txscript.NewScriptBuilder().AddInt64(int64(nextBlockHeight)).
AddInt64(int64(extraNonce)).Script()
}
// createCoinbaseTx returns a coinbase transaction paying an appropriate subsidy based on the passed block height to the
// provided address.
func createCoinbaseTx(
coinbaseScript []byte,
nextBlockHeight int32,
addr btcaddr.Address,
mineTo []wire.TxOut,
net *chaincfg.Params,
version int32,
) (*util.Tx, error) {
// Create the script to pay to the provided payment address.
pkScript, e := txscript.PayToAddrScript(addr)
if e != nil {
return nil, e
}
tx := wire.NewMsgTx(wire.TxVersion)
tx.AddTxIn(
&wire.TxIn{
// Coinbase transactions have no inputs, so previous outpoint is zero hash and max index.
PreviousOutPoint: *wire.NewOutPoint(
&chainhash.Hash{},
wire.MaxPrevOutIndex,
),
SignatureScript: coinbaseScript,
Sequence: wire.MaxTxInSequenceNum,
},
)
if len(mineTo) == 0 {
tx.AddTxOut(
&wire.TxOut{
Value: blockchain.CalcBlockSubsidy(nextBlockHeight, net, version),
PkScript: pkScript,
},
)
} else {
for i := range mineTo {
tx.AddTxOut(&mineTo[i])
}
}
return util.NewTx(tx), nil
}
// CreateBlock creates a new block building from the previous block with a specified block version and timestamp. If the
// timestamp passed is zero (not initialized), then the timestamp of the previous block plus one second is used. Passing
// nil for the previous block results in a block that builds off of the genesis block for the specified chain.
func CreateBlock(
prevBlock *block.Block, inclusionTxs []*util.Tx,
blockVersion int32, blockTime time.Time, miningAddr btcaddr.Address,
mineTo []wire.TxOut, net *chaincfg.Params,
) (*block.Block, error) {
var (
prevHash *chainhash.Hash
blockHeight int32
prevBlockTime time.Time
)
// If the previous block isn't specified, then we'll construct a block that builds off of the genesis block for the
// chain.
if prevBlock == nil {
prevHash = net.GenesisHash
blockHeight = 1
prevBlockTime = net.GenesisBlock.Header.Timestamp.Add(time.Minute)
} else {
prevHash = prevBlock.Hash()
blockHeight = prevBlock.Height() + 1
prevBlockTime = prevBlock.WireBlock().Header.Timestamp
}
// If a target block time was specified, then use that as the header's timestamp. Otherwise, add one second to the
// previous block's timestamp (which, for the genesis case above, is one minute past the genesis timestamp).
var ts time.Time
switch {
case !blockTime.IsZero():
ts = blockTime
default:
ts = prevBlockTime.Add(time.Second)
}
extraNonce := uint64(0)
coinbaseScript, e := standardCoinbaseScript(blockHeight, extraNonce)
if e != nil {
return nil, e
}
coinbaseTx, e := createCoinbaseTx(
coinbaseScript, blockHeight, miningAddr,
mineTo, net, blockVersion,
)
if e != nil {
return nil, e
}
// Create a new block ready to be solved.
blockTxns := []*util.Tx{coinbaseTx}
if inclusionTxs != nil {
blockTxns = append(blockTxns, inclusionTxs...)
}
merkles := blockchain.BuildMerkleTreeStore(blockTxns, false)
var b wire.Block
b.Header = wire.BlockHeader{
Version: blockVersion,
PrevBlock: *prevHash,
MerkleRoot: *merkles.GetRoot(),
Timestamp: ts,
Bits: net.PowLimitBits,
}
for _, tx := range blockTxns {
if e := b.AddTransaction(tx.MsgTx()); E.Chk(e) {
return nil, e
}
}
found := solveBlock(&b.Header, net.PowLimit)
if !found {
return nil, errors.New("unable to solve block")
}
utilBlock := block.NewBlock(&b)
utilBlock.SetHeight(blockHeight)
return utilBlock, nil
}

@@ -0,0 +1,64 @@
package rpctest
import (
"fmt"
"go/build"
"os/exec"
"path/filepath"
"runtime"
"sync"
"github.com/p9c/p9/pkg/util/gobin"
)
var (
// compileMtx guards access to the executable path so that the project is only compiled once.
compileMtx sync.Mutex
// executablePath is the path to the compiled executable. This is the empty string until pod is compiled. This
// should not be accessed directly; instead use the function podExecutablePath().
executablePath string
)
// podExecutablePath returns a path to the pod executable to be used by rpctests. To ensure the code tests against the
// most up-to-date version of pod, this method compiles pod the first time it is called. After that, the generated
// binary is used for subsequent test harnesses. The executable file is not cleaned up, but since it lives at a static
// path in a temp directory, it is not a big deal.
func podExecutablePath() (string, error) {
compileMtx.Lock()
defer compileMtx.Unlock()
// If pod has already been compiled, just use that.
if len(executablePath) != 0 {
return executablePath, nil
}
testDir, e := baseDir()
if e != nil {
return "", e
}
// Determine import path of this package. Not necessarily pod if this is a forked repo.
_, rpctestDir, _, ok := runtime.Caller(1)
if !ok {
return "", fmt.Errorf("cannot get path to pod source code")
}
podPkgPath := filepath.Join(rpctestDir, "..", "..", "..")
podPkg, e := build.ImportDir(podPkgPath, build.FindOnly)
if e != nil {
return "", fmt.Errorf("failed to import pod package: %v", e)
}
// Build pod and output an executable in a static temp path.
outputPath := filepath.Join(testDir, "pod")
if runtime.GOOS == "windows" {
outputPath += ".exe"
}
var gb string
if gb, e = gobin.Get(); E.Chk(e) {
return "", e
}
cmd := exec.Command(gb, "build", "-o", outputPath, podPkg.ImportPath)
e = cmd.Run()
if e != nil {
return "", fmt.Errorf("failed to build pod: %v", e)
}
// Save executable path so future calls do not recompile.
executablePath = outputPath
return executablePath, nil
}

@@ -0,0 +1,13 @@
// Package rpctest provides a pod-specific RPC testing harness crafting and executing integration tests by driving a
// `pod` instance via the `RPC` interface.
//
// Each instance of an active harness comes equipped with a simple in-memory HD wallet capable of properly syncing to
// the generated chain, creating new addresses and crafting fully signed transactions paying to an arbitrary set of
// outputs.
//
// This package was designed specifically to act as an RPC testing
// harness for `pod`.
//
// However, the constructs presented are general enough to be adapted to any project wishing to programmatically drive a
// `pod` instance for its systems/integration tests.
package rpctest

@@ -0,0 +1,43 @@
package rpctest
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// F.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.C(func() string { return "F.C" })
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

@@ -0,0 +1,498 @@
package rpctest
import (
"bytes"
"encoding/binary"
"fmt"
"sync"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/blockchain"
"github.com/p9c/p9/pkg/chainhash"
ec "github.com/p9c/p9/pkg/ecc"
"github.com/p9c/p9/pkg/rpcclient"
"github.com/p9c/p9/pkg/txscript"
"github.com/p9c/p9/pkg/util"
"github.com/p9c/p9/pkg/util/hdkeychain"
"github.com/p9c/p9/pkg/wire"
)
var (
// hdSeed is the BIP 32 seed used by the memWallet to initialize its HD root key. This value is hard coded in order
// to ensure deterministic behavior across test runs.
hdSeed = [chainhash.HashSize]byte{
0x79, 0xa6, 0x1a, 0xdb, 0xc6, 0xe5, 0xa2, 0xe1,
0x39, 0xd2, 0x71, 0x3a, 0x54, 0x6e, 0xc7, 0xc8,
0x75, 0x63, 0x2e, 0x75, 0xf1, 0xdf, 0x9c, 0x3f,
0xa6, 0x01, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
}
)
// utxo represents an unspent output spendable by the memWallet. The maturity height of the transaction is recorded in
// order to properly observe the maturity period of direct coinbase outputs.
type utxo struct {
pkScript []byte
value amt.Amount
keyIndex uint32
maturityHeight int32
isLocked bool
}
// isMature returns true if the target utxo is considered "mature" at the passed block height. Otherwise, false is
// returned.
func (u *utxo) isMature(height int32) bool {
return height >= u.maturityHeight
}
// chainUpdate encapsulates an update to the current main chain. This struct is used to sync up the memWallet each time
// a new block is connected to the main chain.
type chainUpdate struct {
filteredTxns []*util.Tx
blockHeight int32
isConnect bool // True if connect, false if disconnect
}
// undoEntry is functionally the opposite of a chainUpdate. An undoEntry is created for each new block received, then
// stored in a log in order to properly handle block re-orgs.
type undoEntry struct {
utxosDestroyed map[wire.OutPoint]*utxo
utxosCreated []wire.OutPoint
}
// memWallet is a simple in-memory wallet whose purpose is to provide basic wallet functionality to the harness. The
// wallet uses a hard-coded HD key hierarchy which promotes reproducibility between harness test runs.
type memWallet struct {
coinbaseKey *ec.PrivateKey
coinbaseAddr btcaddr.Address
// hdRoot is the root master private key for the wallet.
hdRoot *hdkeychain.ExtendedKey
// hdIndex is the next available key index offset from the hdRoot.
hdIndex uint32
// currentHeight is the latest height the wallet is known to be synced to.
currentHeight int32
// addrs tracks all addresses belonging to the wallet.
// The addresses are indexed by their keypath from the hdRoot.
addrs map[uint32]btcaddr.Address
// utxos is the set of utxos spendable by the wallet.
utxos map[wire.OutPoint]*utxo
// reorgJournal is a map storing an undo entry for each new block received. Once a block is disconnected, the undo
// entry for the particular height is evaluated, thereby rewinding the effect of the disconnected block on the
// wallet's set of spendable utxos.
reorgJournal map[int32]*undoEntry
chainUpdates []*chainUpdate
chainUpdateSignal qu.C
chainMtx sync.Mutex
net *chaincfg.Params
rpc *rpcclient.Client
sync.RWMutex
}
// newMemWallet creates and returns a fully initialized instance of the memWallet given a particular blockchain's
// parameters.
func newMemWallet(net *chaincfg.Params, harnessID uint32) (*memWallet, error) {
// The wallet's final HD seed is: hdSeed || harnessID. This method ensures that each harness instance uses a
// deterministic root seed based on its harness ID.
var harnessHDSeed [chainhash.HashSize + 4]byte
copy(harnessHDSeed[:], hdSeed[:])
binary.BigEndian.PutUint32(harnessHDSeed[chainhash.HashSize:], harnessID)
hdRoot, e := hdkeychain.NewMaster(harnessHDSeed[:], net)
if e != nil {
return nil, e
}
// The first child key from the hd root is reserved as the coinbase generation address.
coinbaseChild, e := hdRoot.Child(0)
if e != nil {
return nil, e
}
coinbaseKey, e := coinbaseChild.ECPrivKey()
if e != nil {
return nil, e
}
coinbaseAddr, e := keyToAddr(coinbaseKey, net)
if e != nil {
return nil, e
}
// Track the coinbase generation address to ensure we properly track newly generated DUO we can spend.
addrs := make(map[uint32]btcaddr.Address)
addrs[0] = coinbaseAddr
return &memWallet{
net: net,
coinbaseKey: coinbaseKey,
coinbaseAddr: coinbaseAddr,
hdIndex: 1,
hdRoot: hdRoot,
addrs: addrs,
utxos: make(map[wire.OutPoint]*utxo),
chainUpdateSignal: qu.T(),
reorgJournal: make(map[int32]*undoEntry),
}, nil
}
// Start launches all goroutines required for the wallet to function properly.
func (m *memWallet) Start() {
go m.chainSyncer()
}
// SyncedHeight returns the height the wallet is known to be synced to. This function is safe for concurrent access.
func (m *memWallet) SyncedHeight() int32 {
m.RLock()
defer m.RUnlock()
return m.currentHeight
}
// SetRPCClient saves the passed rpc connection to pod as the wallet's personal rpc connection.
func (m *memWallet) SetRPCClient(rpcClient *rpcclient.Client) {
m.rpc = rpcClient
}
// IngestBlock is a call-back which is to be triggered each time a new block is connected to the main chain. It queues
// the update for the chain syncer, calling the private version in sequential order.
func (m *memWallet) IngestBlock(height int32, header *wire.BlockHeader, filteredTxns []*util.Tx) {
// Append this new chain update to the end of the queue of new chain
// updates.
m.chainMtx.Lock()
m.chainUpdates = append(
m.chainUpdates, &chainUpdate{
filteredTxns,
height,
true,
},
)
m.chainMtx.Unlock()
// Launch a goroutine to signal the chainSyncer that a new update is available. We do this in a new goroutine in
// order to avoid blocking the main loop of the rpc client.
go func() {
m.chainUpdateSignal <- struct{}{}
}()
}
// ingestBlock updates the wallet's internal utxo state based on the outputs created and destroyed within each block.
func (m *memWallet) ingestBlock(update *chainUpdate) {
// Update the latest synced height, then process each filtered transaction in the block creating and destroying
// utxos within the wallet as a result.
m.currentHeight = update.blockHeight
undo := &undoEntry{
utxosDestroyed: make(map[wire.OutPoint]*utxo),
}
for _, tx := range update.filteredTxns {
mtx := tx.MsgTx()
isCoinbase := blockchain.IsCoinBaseTx(mtx)
txHash := mtx.TxHash()
m.evalOutputs(mtx.TxOut, &txHash, isCoinbase, undo)
m.evalInputs(mtx.TxIn, undo)
}
// Finally, record the undo entry for this block so we can properly update our internal state in response to the
// block being re-org'd from the main chain.
m.reorgJournal[update.blockHeight] = undo
}
// chainSyncer is a goroutine dedicated to processing new blocks in order to keep the wallet's utxo state up to date.
// NOTE: This MUST be run as a goroutine.
func (m *memWallet) chainSyncer() {
var update *chainUpdate
for range m.chainUpdateSignal {
// A new update is available, so pop the new chain update from the front of the update queue.
m.chainMtx.Lock()
update = m.chainUpdates[0]
m.chainUpdates[0] = nil // Set to nil to prevent GC leak.
m.chainUpdates = m.chainUpdates[1:]
m.chainMtx.Unlock()
m.Lock()
if update.isConnect {
m.ingestBlock(update)
} else {
m.unwindBlock(update)
}
m.Unlock()
}
}
// evalOutputs evaluates each of the passed outputs, creating a new matching utxo within the wallet if we're able to
// spend the output.
func (m *memWallet) evalOutputs(
outputs []*wire.TxOut, txHash *chainhash.Hash,
isCoinbase bool, undo *undoEntry,
) {
for i, output := range outputs {
pkScript := output.PkScript
// Scan all the addresses we currently control to see if the output is paying to us.
for keyIndex, addr := range m.addrs {
pkHash := addr.ScriptAddress()
if !bytes.Contains(pkScript, pkHash) {
continue
}
// If this is a coinbase output, then we mark the maturity height at the proper block height in the future.
var maturityHeight int32
if isCoinbase {
maturityHeight = m.currentHeight + int32(m.net.CoinbaseMaturity)
}
op := wire.OutPoint{Hash: *txHash, Index: uint32(i)}
m.utxos[op] = &utxo{
value: amt.Amount(output.Value),
keyIndex: keyIndex,
maturityHeight: maturityHeight,
pkScript: pkScript,
}
undo.utxosCreated = append(undo.utxosCreated, op)
}
}
}
// evalInputs scans all the passed inputs, destroying any utxos within the wallet which are spent by an input.
func (m *memWallet) evalInputs(inputs []*wire.TxIn, undo *undoEntry) {
for _, txIn := range inputs {
op := txIn.PreviousOutPoint
oldUtxo, ok := m.utxos[op]
if !ok {
continue
}
undo.utxosDestroyed[op] = oldUtxo
delete(m.utxos, op)
}
}
// UnwindBlock is a call-back which is to be executed each time a block is disconnected from the main chain. It queues
// the update for the chain syncer, calling the private version in sequential order.
func (m *memWallet) UnwindBlock(height int32, header *wire.BlockHeader) {
// Append this new chain update to the end of the queue of new chain
// updates.
m.chainMtx.Lock()
m.chainUpdates = append(
m.chainUpdates, &chainUpdate{
nil,
height,
false,
},
)
m.chainMtx.Unlock()
// Launch a goroutine to signal the chainSyncer that a new update is available. We do this in a new goroutine in
// order to avoid blocking the main loop of the rpc client.
go func() {
m.chainUpdateSignal <- struct{}{}
}()
}
// unwindBlock undoes the effect that a particular block had on the wallet's internal utxo state.
func (m *memWallet) unwindBlock(update *chainUpdate) {
undo := m.reorgJournal[update.blockHeight]
for _, utxo := range undo.utxosCreated {
delete(m.utxos, utxo)
}
for outPoint, utxo := range undo.utxosDestroyed {
m.utxos[outPoint] = utxo
}
delete(m.reorgJournal, update.blockHeight)
}
// newAddress returns a new address from the wallet's hd key chain. It also loads the address into the RPC client's
// transaction filter to ensure any transactions that involve it are delivered via the notifications.
func (m *memWallet) newAddress() (btcaddr.Address, error) {
index := m.hdIndex
childKey, e := m.hdRoot.Child(index)
if e != nil {
return nil, e
}
privKey, e := childKey.ECPrivKey()
if e != nil {
return nil, e
}
addr, e := keyToAddr(privKey, m.net)
if e != nil {
return nil, e
}
e = m.rpc.LoadTxFilter(false, []btcaddr.Address{addr}, nil)
if e != nil {
return nil, e
}
m.addrs[index] = addr
m.hdIndex++
return addr, nil
}
// NewAddress returns a fresh address spendable by the wallet. This function is safe for concurrent access.
func (m *memWallet) NewAddress() (btcaddr.Address, error) {
m.Lock()
defer m.Unlock()
return m.newAddress()
}
// fundTx attempts to fund a transaction sending the passed amount of coins. The coins are selected such that the final amount spent
// pays enough fees as dictated by the passed fee rate. The passed fee rate should be expressed in satoshis-per-byte.
// The transaction being funded can optionally include a change output indicated by the change boolean. NOTE: The
// memWallet's mutex must be held when this function is called.
func (m *memWallet) fundTx(
tx *wire.MsgTx, amount amt.Amount,
feeRate amt.Amount, change bool,
) (e error) {
const (
// spendSize is the largest number of bytes of a sigScript which spends a p2pkh output: OP_DATA_73 <sig>
// OP_DATA_33 <pubkey>
spendSize = 1 + 73 + 1 + 33
)
var (
amtSelected amt.Amount
txSize int
)
for outPoint, utxo := range m.utxos {
// Skip any outputs that are still currently immature or are currently locked.
if !utxo.isMature(m.currentHeight) || utxo.isLocked {
continue
}
amtSelected += utxo.value
// Add the selected output to the transaction, updating the current tx size while accounting for the size of the
// future sigScript.
op := outPoint
tx.AddTxIn(wire.NewTxIn(&op, nil, nil))
txSize = tx.SerializeSize() + spendSize*len(tx.TxIn)
// Calculate the fee required for the txn at this point observing the specified fee rate. If we don't have
// enough coins from the current amount selected to pay the fee, then continue to grab more coins.
reqFee := amt.Amount(txSize * int(feeRate))
if amtSelected-reqFee < amount {
continue
}
// If we have any change left over and we should create a change output, then add an additional output to the
// transaction reserved for it.
changeVal := amtSelected - amount - reqFee
if changeVal > 0 && change {
addr, e := m.newAddress()
if e != nil {
return e
}
pkScript, e := txscript.PayToAddrScript(addr)
if e != nil {
return e
}
changeOutput := &wire.TxOut{
Value: int64(changeVal),
PkScript: pkScript,
}
tx.AddTxOut(changeOutput)
}
return nil
}
// If we've reached this point, then coin selection failed due to an insufficient amount of coins.
return fmt.Errorf("not enough funds for coin selection")
}
// SendOutputs creates then sends a transaction paying to the specified output while observing the passed fee rate. The
// passed fee rate should be expressed in satoshis-per-byte.
func (m *memWallet) SendOutputs(
outputs []*wire.TxOut,
feeRate amt.Amount,
) (*chainhash.Hash, error) {
tx, e := m.CreateTransaction(outputs, feeRate, true)
if e != nil {
return nil, e
}
return m.rpc.SendRawTransaction(tx, true)
}
// SendOutputsWithoutChange creates and sends a transaction that pays to the specified outputs while observing the
// passed fee rate and ignoring a change output. The passed fee rate should be expressed in sat/b.
func (m *memWallet) SendOutputsWithoutChange(
outputs []*wire.TxOut,
feeRate amt.Amount,
) (*chainhash.Hash, error) {
tx, e := m.CreateTransaction(outputs, feeRate, false)
if e != nil {
return nil, e
}
return m.rpc.SendRawTransaction(tx, true)
}
// CreateTransaction returns a fully signed transaction paying to the specified outputs while observing the desired fee
// rate. The passed fee rate should be expressed in satoshis-per-byte. The transaction being created can optionally
// include a change output indicated by the change boolean. This function is safe for concurrent access.
func (m *memWallet) CreateTransaction(
outputs []*wire.TxOut,
feeRate amt.Amount, change bool,
) (*wire.MsgTx, error) {
m.Lock()
defer m.Unlock()
tx := wire.NewMsgTx(wire.TxVersion)
// Tally up the total amount to be sent in order to perform coin selection shortly below.
var outputAmt amt.Amount
for _, output := range outputs {
outputAmt += amt.Amount(output.Value)
tx.AddTxOut(output)
}
// Attempt to fund the transaction with spendable utxos.
if e := m.fundTx(tx, outputAmt, feeRate, change); E.Chk(e) {
return nil, e
}
// Populate all the selected inputs with valid sigScript for spending. Along the way record all outputs being spent
// in order to avoid a potential double spend.
spentOutputs := make([]*utxo, 0, len(tx.TxIn))
for i, txIn := range tx.TxIn {
outPoint := txIn.PreviousOutPoint
utxo := m.utxos[outPoint]
extendedKey, e := m.hdRoot.Child(utxo.keyIndex)
if e != nil {
return nil, e
}
privKey, e := extendedKey.ECPrivKey()
if e != nil {
return nil, e
}
sigScript, e := txscript.SignatureScript(
tx, i, utxo.pkScript,
txscript.SigHashAll, privKey, true,
)
if e != nil {
return nil, e
}
txIn.SignatureScript = sigScript
spentOutputs = append(spentOutputs, utxo)
}
// As these outputs are now being spent by this newly created transaction, mark the outputs are "locked". This
// action ensures these outputs won't be double spent by any subsequent transactions. These locked outputs can be
// freed via a call to UnlockOutputs.
for _, utxo := range spentOutputs {
utxo.isLocked = true
}
return tx, nil
}
// UnlockOutputs unlocks any outputs which were previously locked due to being selected to fund a transaction via the
// CreateTransaction method. This function is safe for concurrent access.
func (m *memWallet) UnlockOutputs(inputs []*wire.TxIn) {
m.Lock()
defer m.Unlock()
for _, input := range inputs {
utxo, ok := m.utxos[input.PreviousOutPoint]
if !ok {
continue
}
utxo.isLocked = false
}
}
// ConfirmedBalance returns the confirmed balance of the wallet. This function is safe for concurrent access.
func (m *memWallet) ConfirmedBalance() amt.Amount {
m.RLock()
defer m.RUnlock()
var balance amt.Amount
for _, utxo := range m.utxos {
// Prevent any immature or locked outputs from contributing to the wallet's total confirmed balance.
if !utxo.isMature(m.currentHeight) || utxo.isLocked {
continue
}
balance += utxo.value
}
return balance
}
// keyToAddr maps the passed private key to the corresponding p2pkh address.
func keyToAddr(key *ec.PrivateKey, net *chaincfg.Params) (btcaddr.Address, error) {
serializedKey := key.PubKey().SerializeCompressed()
pubKeyAddr, e := btcaddr.NewPubKey(serializedKey, net)
if e != nil {
return nil, e
}
return pubKeyAddr.PubKeyHash(), nil
}

@@ -0,0 +1,271 @@
package rpctest
import (
"fmt"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
"runtime"
"time"
rpc "github.com/p9c/p9/pkg/rpcclient"
"github.com/p9c/p9/pkg/util"
)
// nodeConfig contains all the args and data required to launch a pod process and connect the rpc client to it.
type nodeConfig struct {
rpcUser string
rpcPass string
listen string
rpcListen string
rpcConnect string
dataDir string
logDir string
profile string
debugLevel string
extra []string
prefix string
exe string
endpoint string
certFile string
keyFile string
certificates []byte
}
// newConfig returns a newConfig with all default values.
func newConfig(prefix, certFile, keyFile string, extra []string) (*nodeConfig, error) {
podPath, e := podExecutablePath()
if e != nil {
podPath = "pod"
}
a := &nodeConfig{
listen: "127.0.0.1:41047",
rpcListen: "127.0.0.1:41048",
rpcUser: "user",
rpcPass: "pass",
extra: extra,
prefix: prefix,
exe: podPath,
endpoint: "ws",
certFile: certFile,
keyFile: keyFile,
}
if e := a.setDefaults(); E.Chk(e) {
return nil, e
}
return a, nil
}
// setDefaults sets the default values of the config. It also creates the temporary data, and log directories which must
// be cleaned up with a call to cleanup().
func (n *nodeConfig) setDefaults() (e error) {
datadir, e := ioutil.TempDir("", n.prefix+"-data")
if e != nil {
return e
}
n.dataDir = datadir
logdir, e := ioutil.TempDir("", n.prefix+"-logs")
if e != nil {
return e
}
n.logDir = logdir
cert, e := ioutil.ReadFile(n.certFile)
if e != nil {
return e
}
n.certificates = cert
return nil
}
// arguments returns an array of arguments that can be used to launch the pod process.
func (n *nodeConfig) arguments() []string {
args := []string{}
if n.rpcUser != "" {
// --rpcuser
args = append(args, fmt.Sprintf("--rpcuser=%s", n.rpcUser))
}
if n.rpcPass != "" {
// --rpcpass
args = append(args, fmt.Sprintf("--rpcpass=%s", n.rpcPass))
}
if n.listen != "" {
// --listen
args = append(args, fmt.Sprintf("--listen=%s", n.listen))
}
if n.rpcListen != "" {
// --rpclisten
args = append(args, fmt.Sprintf("--rpclisten=%s", n.rpcListen))
}
if n.rpcConnect != "" {
// --rpcconnect
args = append(args, fmt.Sprintf("--rpcconnect=%s", n.rpcConnect))
}
// --rpccert
args = append(args, fmt.Sprintf("--rpccert=%s", n.certFile))
// --rpckey
args = append(args, fmt.Sprintf("--rpckey=%s", n.keyFile))
if n.dataDir != "" {
// --datadir
args = append(args, fmt.Sprintf("--datadir=%s", n.dataDir))
}
if n.logDir != "" {
// --logdir
args = append(args, fmt.Sprintf("--logdir=%s", n.logDir))
}
if n.profile != "" {
// --profile
args = append(args, fmt.Sprintf("--profile=%s", n.profile))
}
if n.debugLevel != "" {
// --debuglevel
args = append(args, fmt.Sprintf("--debuglevel=%s", n.debugLevel))
}
args = append(args, n.extra...)
return args
}
// command returns the exec.Cmd which will be used to start the pod process.
func (n *nodeConfig) command() *exec.Cmd {
return exec.Command(n.exe, n.arguments()...)
}
// rpcConnConfig returns the rpc connection config that can be used to connect to the pod process that is launched via
// Start().
func (n *nodeConfig) rpcConnConfig() rpc.ConnConfig {
return rpc.ConnConfig{
Host: n.rpcListen,
Endpoint: n.endpoint,
User: n.rpcUser,
Pass: n.rpcPass,
Certificates: n.certificates,
DisableAutoReconnect: true,
}
}
// String returns the string representation of this nodeConfig.
func (n *nodeConfig) String() string {
return n.prefix
}
// cleanup removes the tmp data and log directories.
func (n *nodeConfig) cleanup() (e error) {
dirs := []string{
n.logDir,
n.dataDir,
}
for _, dir := range dirs {
if e = os.RemoveAll(dir); E.Chk(e) {
E.F("Cannot remove dir %s: %v", dir, e)
}
}
return e
}
// node houses the necessary state required to configure, launch, and manage a pod process.
type node struct {
config *nodeConfig
cmd *exec.Cmd
pidFile string
dataDir string
}
// newNode creates a new node instance according to the passed config. dataDir will be used to hold a file recording the
// pid of the launched process, and as the base for the log and data directories for pod.
func newNode(config *nodeConfig, dataDir string) (*node, error) {
return &node{
config: config,
dataDir: dataDir,
cmd: config.command(),
}, nil
}
// start creates a new pod process and writes its pid in a file reserved for recording the pid of the launched process.
// This file can be used to terminate the process in case of a hang or panic. In the case of a failing test case, or
// panic, it is important that the process be stopped via stop(), otherwise it will persist unless explicitly killed.
func (n *node) start() (e error) {
if e = n.cmd.Start(); E.Chk(e) {
return e
}
pid, e := os.Create(
filepath.Join(
n.dataDir,
fmt.Sprintf("%s.pid", n.config),
),
)
if e != nil {
return e
}
n.pidFile = pid.Name()
if _, e = fmt.Fprintf(pid, "%d\n", n.cmd.Process.Pid); E.Chk(e) {
return e
}
if e = pid.Close(); E.Chk(e) {
return e
}
return nil
}
// stop interrupts the running pod process, and waits until it exits properly. On windows, interrupt is not
// supported, so a kill signal is used instead.
func (n *node) stop() (e error) {
if n.cmd == nil || n.cmd.Process == nil {
// return if not properly initialized or error starting the process
return nil
}
defer func() {
// waiting on the process is best-effort here; a failure is non-blocking
e := n.cmd.Wait()
D.Chk(e)
}()
if runtime.GOOS == "windows" {
return n.cmd.Process.Signal(os.Kill)
}
return n.cmd.Process.Signal(os.Interrupt)
}
// cleanup cleanups process and args files. The file housing the pid of the created process will be deleted as well as
// any directories created by the process.
func (n *node) cleanup() (e error) {
if n.pidFile != "" {
if e := os.Remove(n.pidFile); E.Chk(e) {
E.F("unable to remove file %s: %v", n.pidFile, e)
}
}
return n.config.cleanup()
}
// shutdown terminates the running pod process and cleans up all file/directories created by node.
func (n *node) shutdown() (e error) {
if e := n.stop(); E.Chk(e) {
return e
}
if e := n.cleanup(); E.Chk(e) {
return e
}
return nil
}
// genCertPair generates a key/cert pair to the paths provided.
func genCertPair(certFile, keyFile string) (e error) {
org := "rpctest autogenerated cert"
validUntil := time.Now().Add(10 * 365 * 24 * time.Hour)
var key []byte
var cert []byte
cert, key, e = util.NewTLSCertPair(org, validUntil, nil)
if e != nil {
return e
}
// Write cert and key files.
if e = ioutil.WriteFile(certFile, cert, 0666); E.Chk(e) {
return e
}
if e = ioutil.WriteFile(keyFile, key, 0600); E.Chk(e) {
defer func() {
// use a separate variable so the deferred removal cannot overwrite the named return value e
if e2 := os.Remove(certFile); E.Chk(e2) {
}
}()
return e
}
return nil
}

package rpctest
import (
"fmt"
"io/ioutil"
"net"
"os"
"path/filepath"
"strconv"
"sync"
"testing"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/block"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/rpcclient"
"github.com/p9c/p9/pkg/util"
"github.com/p9c/p9/pkg/wire"
)
const (
// These constants define the minimum and maximum p2p and rpc port numbers used by a test harness. The min port is
// inclusive while the max port is exclusive.
minPeerPort = 10000
maxPeerPort = 35000
minRPCPort = maxPeerPort
maxRPCPort = 60000
// BlockVersion is the default block version used when generating blocks.
BlockVersion = 4
)
var (
// current number of active test nodes.
numTestInstances = 0
// processID is the process ID of the currently running process. It is used to calculate ports based upon it when
// launching rpc harnesses. The intent is to allow multiple processes to run in parallel without port collisions.
// It should be noted, however, that there is still some small probability of port collisions, either
// due to other processes running or simply due to the stars aligning on the process IDs.
processID = os.Getpid()
// testInstances is a private package-level map used to keep track of all active test harnesses. This global can
// be used to perform various "joins", shut down several active harnesses after a test, etc.
testInstances = make(map[string]*Harness)
// Used to protect concurrent access to the above declared variables.
harnessStateMtx sync.RWMutex
)
// HarnessTestCase represents a test-case which utilizes an instance of the Harness to exercise functionality.
type HarnessTestCase func(r *Harness, t *testing.T)
// Harness fully encapsulates an active pod process to provide a unified platform for creating rpc driven integration
// tests involving pod. The active pod node will typically be run in simnet mode in order to allow for easy generation
// of test blockchains. The active pod process is fully managed by Harness, which handles the necessary initialization,
// and teardown of the process along with any temporary directories created as a result. Multiple Harness instances may
// be run concurrently, in order to allow for testing complex scenarios involving multiple nodes. The harness also
// includes an in-memory wallet to streamline various classes of tests.
type Harness struct {
// ActiveNet is the parameters of the blockchain the Harness belongs to.
ActiveNet *chaincfg.Params
Node *rpcclient.Client
node *node
handlers *rpcclient.NotificationHandlers
wallet *memWallet
testNodeDir string
maxConnRetries int
nodeNum int
sync.Mutex
}
// New creates and initializes a new instance of the rpc test harness. Optionally, websocket handlers and a specified
// configuration may be passed. In the case that a nil config is passed, a default configuration will be used. NOTE:
// This function is safe for concurrent access.
func New(
activeNet *chaincfg.Params, handlers *rpcclient.NotificationHandlers,
extraArgs []string,
) (*Harness, error) {
harnessStateMtx.Lock()
defer harnessStateMtx.Unlock()
// Add a flag for the appropriate network type based on the provided chain parameters.
switch activeNet.Net {
case wire.MainNet:
// No extra flags since mainnet is the default
case wire.TestNet3:
extraArgs = append(extraArgs, "--testnet")
case wire.TestNet:
extraArgs = append(extraArgs, "--regtest")
case wire.SimNet:
extraArgs = append(extraArgs, "--simnet")
default:
return nil, fmt.Errorf(
"rpctest.New must be called with one " +
"of the supported chain networks",
)
}
testDir, e := baseDir()
if e != nil {
return nil, e
}
harnessID := strconv.Itoa(numTestInstances)
nodeTestData, e := ioutil.TempDir(testDir, "harness-"+harnessID)
if e != nil {
return nil, e
}
certFile := filepath.Join(nodeTestData, "rpc.cert")
keyFile := filepath.Join(nodeTestData, "rpc.key")
if e = genCertPair(certFile, keyFile); E.Chk(e) {
return nil, e
}
wallet, e := newMemWallet(activeNet, uint32(numTestInstances))
if e != nil {
return nil, e
}
miningAddr := fmt.Sprintf("--miningaddr=%s", wallet.coinbaseAddr)
extraArgs = append(extraArgs, miningAddr)
config, e := newConfig("rpctest", certFile, keyFile, extraArgs)
if e != nil {
return nil, e
}
// Generate p2p+rpc listening addresses.
config.listen, config.rpcListen = generateListeningAddresses()
// Create the testing node bound to the simnet.
node, e := newNode(config, nodeTestData)
if e != nil {
return nil, e
}
nodeNum := numTestInstances
numTestInstances++
if handlers == nil {
handlers = &rpcclient.NotificationHandlers{}
}
// If a handler for the OnFilteredBlock{Connected, Disconnected} callback has already been set, then create
// a wrapper callback which executes both the currently registered callback and the mem wallet's callback.
if handlers.OnFilteredBlockConnected != nil {
obc := handlers.OnFilteredBlockConnected
handlers.OnFilteredBlockConnected = func(height int32, header *wire.BlockHeader, filteredTxns []*util.Tx) {
wallet.IngestBlock(height, header, filteredTxns)
obc(height, header, filteredTxns)
}
} else {
// Otherwise, we can claim the callback ourselves.
handlers.OnFilteredBlockConnected = wallet.IngestBlock
}
if handlers.OnFilteredBlockDisconnected != nil {
obd := handlers.OnFilteredBlockDisconnected
handlers.OnFilteredBlockDisconnected = func(height int32, header *wire.BlockHeader) {
wallet.UnwindBlock(height, header)
obd(height, header)
}
} else {
handlers.OnFilteredBlockDisconnected = wallet.UnwindBlock
}
h := &Harness{
handlers: handlers,
node: node,
maxConnRetries: 20,
testNodeDir: nodeTestData,
ActiveNet: activeNet,
nodeNum: nodeNum,
wallet: wallet,
}
// Track this newly created test instance within the package level global map of all active test instances.
testInstances[h.testNodeDir] = h
return h, nil
}
// SetUp initializes the rpc test state. Initialization includes: starting up a simnet node, creating a websockets
// client and connecting to the started node, and finally: optionally generating and submitting a testchain with a
// configurable number of mature coinbase outputs. NOTE: This method and TearDown should always be
// called from the same goroutine as they are not concurrent safe.
func (h *Harness) SetUp(createTestChain bool, numMatureOutputs uint32) (e error) {
// Start the pod node itself. This spawns a new process which will be managed by the harness.
if e = h.node.start(); E.Chk(e) {
return e
}
if e = h.connectRPCClient(); E.Chk(e) {
return e
}
h.wallet.Start()
// Filter transactions that pay to the coinbase associated with the wallet.
filterAddrs := []btcaddr.Address{h.wallet.coinbaseAddr}
if e = h.Node.LoadTxFilter(true, filterAddrs, nil); E.Chk(e) {
return e
}
// Ensure pod properly dispatches our registered callback for each new block. Otherwise, the memWallet won't
// function properly.
if e = h.Node.NotifyBlocks(); E.Chk(e) {
return e
}
// Create a test chain with the desired number of mature coinbase outputs.
if createTestChain && numMatureOutputs != 0 {
numToGenerate := uint32(h.ActiveNet.CoinbaseMaturity) +
numMatureOutputs
_, e = h.Node.Generate(numToGenerate)
if e != nil {
return e
}
}
// Block until the wallet has fully synced up to the tip of the main chain.
_, height, e := h.Node.GetBestBlock()
if e != nil {
return e
}
ticker := time.NewTicker(time.Millisecond * 100)
for range ticker.C {
walletHeight := h.wallet.SyncedHeight()
if walletHeight == height {
break
}
}
ticker.Stop()
return nil
}
// tearDown stops the running rpc test instance. All created processes are killed, and temporary directories removed.
// This function MUST be called with the harness state mutex held (for writes).
func (h *Harness) tearDown() (e error) {
if h.Node != nil {
h.Node.Shutdown()
}
if e := h.node.shutdown(); E.Chk(e) {
return e
}
if e := os.RemoveAll(h.testNodeDir); E.Chk(e) {
return e
}
delete(testInstances, h.testNodeDir)
return nil
}
// TearDown stops the running rpc test instance. All created processes are killed, and temporary directories removed.
// NOTE: This method and SetUp should always be called from the same goroutine as they are not concurrent safe.
func (h *Harness) TearDown() (e error) {
harnessStateMtx.Lock()
defer harnessStateMtx.Unlock()
return h.tearDown()
}
// connectRPCClient attempts to establish an RPC connection to the created pod process belonging to this Harness
// instance. If the initial connection attempt fails, this function will retry h.maxConnRetries times, backing off the
// time between subsequent attempts. If after h.maxConnRetries attempts we're not able to establish a connection, this
// function returns with an error.
func (h *Harness) connectRPCClient() (e error) {
var client *rpcclient.Client
rpcConf := h.node.config.rpcConnConfig()
for i := 0; i < h.maxConnRetries; i++ {
if client, e = rpcclient.New(&rpcConf, h.handlers, qu.T()); E.Chk(e) {
time.Sleep(time.Duration(i) * 50 * time.Millisecond)
continue
}
break
}
if client == nil {
return fmt.Errorf("connection timeout")
}
h.Node = client
h.wallet.SetRPCClient(client)
return nil
}
// NewAddress returns a fresh address spendable by the Harness' internal wallet. This function is safe for concurrent
// access.
func (h *Harness) NewAddress() (btcaddr.Address, error) {
return h.wallet.NewAddress()
}
// ConfirmedBalance returns the confirmed balance of the Harness' internal wallet. This function is safe for concurrent
// access.
func (h *Harness) ConfirmedBalance() amt.Amount {
return h.wallet.ConfirmedBalance()
}
// SendOutputs creates, signs, and finally broadcasts a transaction spending the harness' available mature coinbase
// outputs, creating new outputs according to targetOutputs. This function is safe for concurrent access.
func (h *Harness) SendOutputs(
targetOutputs []*wire.TxOut,
feeRate amt.Amount,
) (*chainhash.Hash, error) {
return h.wallet.SendOutputs(targetOutputs, feeRate)
}
// SendOutputsWithoutChange creates and sends a transaction that pays to the specified outputs while observing the
// passed fee rate and ignoring a change output. The passed fee rate should be expressed in sat/b. This function is safe
// for concurrent access.
func (h *Harness) SendOutputsWithoutChange(
targetOutputs []*wire.TxOut,
feeRate amt.Amount,
) (*chainhash.Hash, error) {
return h.wallet.SendOutputsWithoutChange(targetOutputs, feeRate)
}
// CreateTransaction returns a fully signed transaction paying to the specified outputs while observing the desired fee
// rate. The passed fee rate should be expressed in satoshis-per-byte. The transaction being created can optionally
// include a change output indicated by the change boolean. Any unspent outputs selected as inputs for the crafted
// transaction are marked as unspendable in order to avoid potential double-spends by future calls to this method. If
// the created transaction is cancelled for any reason then the selected inputs MUST be freed via a call to
// UnlockOutputs. Otherwise, the locked inputs won't be returned to the pool of spendable outputs. This function is safe
// for concurrent access.
func (h *Harness) CreateTransaction(
targetOutputs []*wire.TxOut,
feeRate amt.Amount, change bool,
) (*wire.MsgTx, error) {
return h.wallet.CreateTransaction(targetOutputs, feeRate, change)
}
// UnlockOutputs unlocks any outputs which were previously marked as unspendable due to being selected to fund a
// transaction via the CreateTransaction method. This function is safe for concurrent access.
func (h *Harness) UnlockOutputs(inputs []*wire.TxIn) {
h.wallet.UnlockOutputs(inputs)
}
// RPCConfig returns the harness' current rpc configuration. This allows other potential RPC clients created within
// tests to connect to a given test harness instance.
func (h *Harness) RPCConfig() rpcclient.ConnConfig {
return h.node.config.rpcConnConfig()
}
// P2PAddress returns the harness' P2P listening address. This allows potential peers (such as SPV peers) created
// within tests to connect to a given test harness instance.
func (h *Harness) P2PAddress() string {
return h.node.config.listen
}
// GenerateAndSubmitBlock creates a block whose contents include the passed transactions and submits it to the running
// simnet node. For generating blocks with only a coinbase tx, callers can simply pass nil instead of transactions to be
// mined. Additionally, a custom block version can be set by the caller. A blockVersion of ^uint32(0) (all bits set,
// the unsigned equivalent of -1) indicates that the current default block version should be used. An uninitialized
// time.Time should be used for the blockTime parameter if one doesn't wish to set a custom time. This function is safe
// for concurrent access.
func (h *Harness) GenerateAndSubmitBlock(
txns []*util.Tx, blockVersion uint32,
blockTime time.Time,
) (*block.Block, error) {
return h.GenerateAndSubmitBlockWithCustomCoinbaseOutputs(
txns,
blockVersion, blockTime, []wire.TxOut{},
)
}
// GenerateAndSubmitBlockWithCustomCoinbaseOutputs creates a block whose contents include the passed coinbase outputs
// and transactions and submits it to the running simnet node. For generating blocks with only a coinbase tx, callers
// can simply pass nil instead of transactions to be mined. Additionally, a custom block version can be set by the
// caller. A blockVersion of ^uint32(0) (the unsigned equivalent of -1) indicates that the current default block
// version should be used. An uninitialized time.Time should be used for the blockTime parameter if one doesn't wish to
// set a custom time. The mineTo list of outputs will be added to the coinbase; this is not checked for correctness
// until the block is submitted; thus, it is the caller's responsibility to ensure that the outputs are correct. If the
// list is empty, the coinbase reward goes to the wallet managed by the Harness. This function is safe for concurrent
// access.
func (h *Harness) GenerateAndSubmitBlockWithCustomCoinbaseOutputs(
txns []*util.Tx, blockVersion uint32, blockTime time.Time,
mineTo []wire.TxOut,
) (*block.Block, error) {
h.Lock()
defer h.Unlock()
if blockVersion == ^uint32(0) {
blockVersion = BlockVersion
}
prevBlockHash, prevBlockHeight, e := h.Node.GetBestBlock()
if e != nil {
return nil, e
}
mBlock, e := h.Node.GetBlock(prevBlockHash)
if e != nil {
return nil, e
}
prevBlock := block.NewBlock(mBlock)
prevBlock.SetHeight(prevBlockHeight)
// Create a new block including the specified transactions
newBlock, e := CreateBlock(
prevBlock, txns, int32(blockVersion),
blockTime, h.wallet.coinbaseAddr, mineTo, h.ActiveNet,
)
if e != nil {
return nil, e
}
// Submit the block to the simnet node.
if e := h.Node.SubmitBlock(newBlock, nil); E.Chk(e) {
return nil, e
}
return newBlock, nil
}
// generateListeningAddresses returns two strings representing listening addresses designated for the current rpc test.
// If there haven't been any test instances created, the default ports are used. Otherwise, in order to support multiple
// test nodes running at once, the p2p and rpc ports are incremented after each initialization.
func generateListeningAddresses() (string, string) {
localhost := "127.0.0.1"
portString := func(minPort, maxPort int) string {
port := minPort + numTestInstances + ((20 * processID) %
(maxPort - minPort))
return strconv.Itoa(port)
}
p2p := net.JoinHostPort(localhost, portString(minPeerPort, maxPeerPort))
rpc := net.JoinHostPort(localhost, portString(minRPCPort, maxRPCPort))
return p2p, rpc
}
// baseDir is the directory path of the temp directory for all rpctest files.
func baseDir() (string, error) {
dirPath := filepath.Join(os.TempDir(), "pod", "rpctest")
e := os.MkdirAll(dirPath, 0755)
return dirPath, e
}

package rpctest
import (
"fmt"
"os"
"testing"
"time"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/chaincfg"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/txscript"
"github.com/p9c/p9/pkg/util"
"github.com/p9c/p9/pkg/wire"
)
func testSendOutputs(r *Harness, t *testing.T) {
genSpend := func(amt amt.Amount) *chainhash.Hash {
// Grab a fresh address from the wallet.
addr, e := r.NewAddress()
if e != nil {
t.Fatalf("unable to get new address: %v", e)
}
// Next, send amt DUO to this address,
// spending from one of our mature coinbase outputs.
addrScript, e := txscript.PayToAddrScript(addr)
if e != nil {
t.Fatalf("unable to generate pkscript to addr: %v", e)
}
output := wire.NewTxOut(int64(amt), addrScript)
txid, e := r.SendOutputs([]*wire.TxOut{output}, 10)
if e != nil {
t.Fatalf("coinbase spend failed: %v", e)
}
return txid
}
assertTxMined := func(txid *chainhash.Hash, blockHash *chainhash.Hash) {
block, e := r.Node.GetBlock(blockHash)
if e != nil {
t.Fatalf("unable to get block: %v", e)
}
numBlockTxns := len(block.Transactions)
if numBlockTxns < 2 {
t.Fatalf(
"crafted transaction wasn't mined, block should have "+
"at least %v transactions instead has %v", 2, numBlockTxns,
)
}
minedTx := block.Transactions[1]
txHash := minedTx.TxHash()
if txHash != *txid {
t.Fatalf("txids don't match, %v vs %v", txHash, txid)
}
}
// First, generate a small spend which will require only a single input.
txid := genSpend(5 * amt.SatoshiPerBitcoin)
// Generate a single block, the transaction the wallet created should be found in this block.
blockHashes, e := r.Node.Generate(1)
if e != nil {
t.Fatalf("unable to generate single block: %v", e)
}
assertTxMined(txid, blockHashes[0])
// Next, generate a spend much greater than the block reward. This transaction should also have been mined properly.
txid = genSpend(500 * amt.SatoshiPerBitcoin)
blockHashes, e = r.Node.Generate(1)
if e != nil {
t.Fatalf("unable to generate single block: %v", e)
}
assertTxMined(txid, blockHashes[0])
}
func assertConnectedTo(t *testing.T, nodeA *Harness, nodeB *Harness) {
nodeAPeers, e := nodeA.Node.GetPeerInfo()
if e != nil {
t.Fatalf("unable to get nodeA's peer info")
}
nodeAddr := nodeB.node.config.listen
addrFound := false
for _, peerInfo := range nodeAPeers {
if peerInfo.Addr == nodeAddr {
addrFound = true
break
}
}
if !addrFound {
t.Fatal("nodeA not connected to nodeB")
}
}
func testConnectNode(r *Harness, t *testing.T) {
// Create a fresh test harness.
harness, e := New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
t.Fatal(e)
}
if e := harness.SetUp(false, 0); E.Chk(e) {
t.Fatalf("unable to complete rpctest setup: %v", e)
}
defer func() {
if e := harness.TearDown(); E.Chk(e) {
}
}()
// Establish a p2p connection from our new local harness to the main harness.
if e := ConnectNode(harness, r); E.Chk(e) {
t.Fatalf("unable to connect local to main harness: %v", e)
}
// The main harness should show up in our local harness' peer list, and vice versa.
assertConnectedTo(t, harness, r)
}
func testTearDownAll(t *testing.T) {
// Grab a local copy of the currently active harnesses before attempting to tear them all down.
initialActiveHarnesses := ActiveHarnesses()
// Tear down all currently active harnesses.
if e := TearDownAll(); E.Chk(e) {
t.Fatalf("unable to teardown all harnesses: %v", e)
}
// The global testInstances map should now be fully purged with no active test harnesses remaining.
if len(ActiveHarnesses()) != 0 {
t.Fatalf("test harnesses still active after TearDownAll")
}
for _, harness := range initialActiveHarnesses {
// Ensure all test directories have been deleted.
var e error
if _, e = os.Stat(harness.testNodeDir); e == nil {
t.Errorf("created test datadir was not deleted.")
}
}
}
func testActiveHarnesses(r *Harness, t *testing.T) {
numInitialHarnesses := len(ActiveHarnesses())
// Create a single test harness.
harness1, e := New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
t.Fatal(e)
}
defer func() {
if e := harness1.TearDown(); E.Chk(e) {
}
}()
// With the harness created above, a single harness should be detected as active.
numActiveHarnesses := len(ActiveHarnesses())
if !(numActiveHarnesses > numInitialHarnesses) {
t.Fatalf(
"ActiveHarnesses not updated, should have an " +
"additional test harness listed.",
)
}
}
func testJoinMempools(r *Harness, t *testing.T) {
// Assert main test harness has no transactions in its mempool.
pooledHashes, e := r.Node.GetRawMempool()
if e != nil {
t.Fatalf("unable to get mempool for main test harness: %v", e)
}
if len(pooledHashes) != 0 {
t.Fatal("main test harness mempool not empty")
}
// Create a local test harness with only the genesis block. The nodes will be synced below so the same transaction
// can be sent to both nodes without it being an orphan.
var harness *Harness
harness, e = New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
t.Fatal(e)
}
if e = harness.SetUp(false, 0); E.Chk(e) {
t.Fatalf("unable to complete rpctest setup: %v", e)
}
defer func() {
if e = harness.TearDown(); E.Chk(e) {
}
}()
nodeSlice := []*Harness{r, harness}
// Both mempools should be considered synced as they are empty. Therefore, this should return instantly.
if e = JoinNodes(nodeSlice, Mempools); E.Chk(e) {
t.Fatalf("unable to join node on mempools: %v", e)
}
// Generate a coinbase spend to a new address within the main harness' mempool.
var addr btcaddr.Address
if addr, e = r.NewAddress(); E.Chk(e) {
t.Fatalf("unable to generate new address: %v", e)
}
var addrScript []byte
addrScript, e = txscript.PayToAddrScript(addr)
if e != nil {
t.Fatalf("unable to generate pkscript to addr: %v", e)
}
output := wire.NewTxOut(5e8, addrScript)
var testTx *wire.MsgTx
testTx, e = r.CreateTransaction([]*wire.TxOut{output}, 10, true)
if e != nil {
t.Fatalf("coinbase spend failed: %v", e)
}
if _, e = r.Node.SendRawTransaction(testTx, true); E.Chk(e) {
t.Fatalf("send transaction failed: %v", e)
}
// Wait until the transaction shows up to ensure the two mempools are not the same.
harnessSynced := qu.T()
go func() {
for {
var poolHashes []*chainhash.Hash
poolHashes, e = r.Node.GetRawMempool()
if e != nil {
t.Fatalf("failed to retrieve harness mempool: %v", e)
}
if len(poolHashes) > 0 {
break
}
time.Sleep(time.Millisecond * 100)
}
harnessSynced <- struct{}{}
}()
select {
case <-harnessSynced.Wait():
case <-time.After(time.Minute):
t.Fatalf("harness node never received transaction")
}
// This select case should fall through to the default as the goroutine should be blocked on the JoinNodes call.
poolsSynced := qu.T()
go func() {
if e = JoinNodes(nodeSlice, Mempools); E.Chk(e) {
t.Fatalf("unable to join node on mempools: %v", e)
}
poolsSynced <- struct{}{}
}()
select {
case <-poolsSynced.Wait():
t.Fatalf("mempools detected as synced yet harness has a new tx")
default:
}
// Establish an outbound connection from the local harness to the main harness and wait for the chains to be synced.
if e = ConnectNode(harness, r); E.Chk(e) {
t.Fatalf("unable to connect harnesses: %v", e)
}
if e = JoinNodes(nodeSlice, Blocks); E.Chk(e) {
t.Fatalf("unable to join node on blocks: %v", e)
}
// Send the transaction to the local harness which will result in synced mempools.
if _, e = harness.Node.SendRawTransaction(testTx, true); E.Chk(e) {
t.Fatalf("send transaction failed: %v", e)
}
// Select once again with a special timeout case after 1 minute. The goroutine above should now be blocked on
// sending into the unbuffered channel. The send should immediately succeed. In order to avoid the test hanging
// indefinitely, a 1 minute timeout is in place.
select {
case <-poolsSynced.Wait():
case <-time.After(time.Minute):
t.Fatalf("mempools never detected as synced")
}
}
func testJoinBlocks(r *Harness, t *testing.T) {
// Create a second harness with only the genesis block so it is behind the main harness.
harness, e := New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
t.Fatal(e)
}
if e := harness.SetUp(false, 0); E.Chk(e) {
t.Fatalf("unable to complete rpctest setup: %v", e)
}
defer func() {
if e := harness.TearDown(); E.Chk(e) {
}
}()
nodeSlice := []*Harness{r, harness}
blocksSynced := qu.T()
go func() {
if e := JoinNodes(nodeSlice, Blocks); E.Chk(e) {
t.Fatalf("unable to join node on blocks: %v", e)
}
blocksSynced <- struct{}{}
}()
// This select case should fall through to the default as the goroutine should be blocked on the JoinNodes calls.
select {
case <-blocksSynced.Wait():
t.Fatalf("blocks detected as synced yet local harness is behind")
default:
}
// Connect the local harness to the main harness which will sync the chains.
if e := ConnectNode(harness, r); E.Chk(e) {
t.Fatalf("unable to connect harnesses: %v", e)
}
// Select once again with a special timeout case after 1 minute. The goroutine above should now be blocked on
// sending into the unbuffered channel. The send should immediately succeed. In order to avoid the test hanging
// indefinitely, a 1 minute timeout is in place.
select {
case <-blocksSynced.Wait():
case <-time.After(time.Minute):
t.Fatalf("blocks never detected as synced")
}
}
func testGenerateAndSubmitBlock(r *Harness, t *testing.T) {
// Generate a few test spend transactions.
addr, e := r.NewAddress()
if e != nil {
t.Fatalf("unable to generate new address: %v", e)
}
pkScript, e := txscript.PayToAddrScript(addr)
if e != nil {
t.Fatalf("unable to create script: %v", e)
}
output := wire.NewTxOut(amt.SatoshiPerBitcoin.Int64(), pkScript)
const numTxns = 5
txns := make([]*util.Tx, 0, numTxns)
var tx *wire.MsgTx
for i := 0; i < numTxns; i++ {
tx, e = r.CreateTransaction([]*wire.TxOut{output}, 10, true)
if e != nil {
t.Fatalf("unable to create tx: %v", e)
}
txns = append(txns, util.NewTx(tx))
}
// Now generate a block with the default block version, and a zeroed out time.
block, e := r.GenerateAndSubmitBlock(txns, ^uint32(0), time.Time{})
if e != nil {
t.Fatalf("unable to generate block: %v", e)
}
// Ensure that all created transactions were included, and that the block version was properly set to the default.
numBlocksTxns := len(block.Transactions())
if numBlocksTxns != numTxns+1 {
t.Fatalf(
"block did not include all transactions: "+
"expected %v, got %v", numTxns+1, numBlocksTxns,
)
}
blockVersion := block.WireBlock().Header.Version
if blockVersion != BlockVersion {
t.Fatalf(
"block version is not default: expected %v, got %v",
BlockVersion, blockVersion,
)
}
// Next generate a block with a "non-standard" block version along with a timestamp a minute after the previous
// block's timestamp.
timestamp := block.WireBlock().Header.Timestamp.Add(time.Minute)
targetBlockVersion := uint32(1337)
block, e = r.GenerateAndSubmitBlock(nil, targetBlockVersion, timestamp)
if e != nil {
t.Fatalf("unable to generate block: %v", e)
}
// Finally ensure that the desired block version and timestamp were set properly.
header := block.WireBlock().Header
blockVersion = header.Version
if blockVersion != int32(targetBlockVersion) {
t.Fatalf(
"block version mismatch: expected %v, got %v",
targetBlockVersion, blockVersion,
)
}
if !timestamp.Equal(header.Timestamp) {
t.Fatalf(
"header time stamp mismatch: expected %v, got %v",
timestamp, header.Timestamp,
)
}
}
func testGenerateAndSubmitBlockWithCustomCoinbaseOutputs(
r *Harness,
t *testing.T,
) {
// Generate a few test spend transactions.
addr, e := r.NewAddress()
if e != nil {
t.Fatalf("unable to generate new address: %v", e)
}
pkScript, e := txscript.PayToAddrScript(addr)
if e != nil {
t.Fatalf("unable to create script: %v", e)
}
output := wire.NewTxOut(amt.SatoshiPerBitcoin.Int64(), pkScript)
const numTxns = 5
txns := make([]*util.Tx, 0, numTxns)
for i := 0; i < numTxns; i++ {
var tx *wire.MsgTx
tx, e = r.CreateTransaction([]*wire.TxOut{output}, 10, true)
if e != nil {
t.Fatalf("unable to create tx: %v", e)
}
txns = append(txns, util.NewTx(tx))
}
// Now generate a block with the default block version, a zeroed out time, and a burn output.
block, e := r.GenerateAndSubmitBlockWithCustomCoinbaseOutputs(
txns,
^uint32(0), time.Time{}, []wire.TxOut{
{
Value: 0,
PkScript: []byte{},
},
},
)
if e != nil {
t.Fatalf("unable to generate block: %v", e)
}
// Ensure that all created transactions were included, and that the block version was properly set to the default.
numBlocksTxns := len(block.Transactions())
if numBlocksTxns != numTxns+1 {
t.Fatalf(
"block did not include all transactions: "+
"expected %v, got %v", numTxns+1, numBlocksTxns,
)
}
blockVersion := block.WireBlock().Header.Version
if blockVersion != BlockVersion {
t.Fatalf(
"block version is not default: expected %v, got %v",
BlockVersion, blockVersion,
)
}
// Next generate a block with a "non-standard" block version along with a timestamp a minute after the previous
// block's timestamp.
timestamp := block.WireBlock().Header.Timestamp.Add(time.Minute)
targetBlockVersion := uint32(1337)
block, e = r.GenerateAndSubmitBlockWithCustomCoinbaseOutputs(
nil,
targetBlockVersion, timestamp, []wire.TxOut{
{
Value: 0,
PkScript: []byte{},
},
},
)
if e != nil {
t.Fatalf("unable to generate block: %v", e)
}
// Finally ensure that the desired block version and timestamp were set properly.
header := block.WireBlock().Header
blockVersion = header.Version
if blockVersion != int32(targetBlockVersion) {
t.Fatalf(
"block version mismatch: expected %v, got %v",
targetBlockVersion, blockVersion,
)
}
if !timestamp.Equal(header.Timestamp) {
t.Fatalf(
"header time stamp mismatch: expected %v, got %v",
timestamp, header.Timestamp,
)
}
}
func testMemWalletReorg(r *Harness, t *testing.T) {
// Create a fresh harness; we'll be using the main harness to force a re-org on this local harness.
harness, e := New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
t.Fatal(e)
}
if e := harness.SetUp(true, 5); E.Chk(e) {
t.Fatalf("unable to complete rpctest setup: %v", e)
}
defer func() {
if e := harness.TearDown(); E.Chk(e) {
}
}()
// The internal wallet of this harness should now have 250 DUO.
expectedBalance := 250 * amt.SatoshiPerBitcoin
walletBalance := harness.ConfirmedBalance()
if expectedBalance != walletBalance {
t.Fatalf(
"wallet balance incorrect: expected %v, got %v",
expectedBalance, walletBalance,
)
}
// Now connect this local harness to the main harness then wait for their chains to synchronize.
if e := ConnectNode(harness, r); E.Chk(e) {
t.Fatalf("unable to connect harnesses: %v", e)
}
nodeSlice := []*Harness{r, harness}
if e := JoinNodes(nodeSlice, Blocks); E.Chk(e) {
t.Fatalf("unable to join node on blocks: %v", e)
}
// The original wallet should now have a balance of 0 DUO as its entire chain should have been decimated in favor of
// the main harness' chain.
expectedBalance = amt.Amount(0)
walletBalance = harness.ConfirmedBalance()
if expectedBalance != walletBalance {
t.Fatalf(
"wallet balance incorrect: expected %v, got %v",
expectedBalance, walletBalance,
)
}
}
func testMemWalletLockedOutputs(r *Harness, t *testing.T) {
// Obtain the initial balance of the wallet at this point.
startingBalance := r.ConfirmedBalance()
// First, create a signed transaction spending some outputs.
addr, e := r.NewAddress()
if e != nil {
t.Fatalf("unable to generate new address: %v", e)
}
pkScript, e := txscript.PayToAddrScript(addr)
if e != nil {
t.Fatalf("unable to create script: %v", e)
}
outputAmt := 50 * amt.SatoshiPerBitcoin
output := wire.NewTxOut(int64(outputAmt), pkScript)
tx, e := r.CreateTransaction([]*wire.TxOut{output}, 10, true)
if e != nil {
t.Fatalf("unable to create transaction: %v", e)
}
// The current wallet balance should now be at least 50 DUO less (accounting for fees) than the starting balance.
currentBalance := r.ConfirmedBalance()
if !(currentBalance <= startingBalance-outputAmt) {
t.Fatalf(
"spent outputs not locked: previous balance %v, "+
"current balance %v", startingBalance, currentBalance,
)
}
// Now unlock all the spent inputs within the unbroadcast signed transaction. The current balance should now be
// exactly that of the starting balance.
r.UnlockOutputs(tx.TxIn)
currentBalance = r.ConfirmedBalance()
if currentBalance != startingBalance {
t.Fatalf(
"current and starting balance should now match: "+
"expected %v, got %v", startingBalance, currentBalance,
)
}
}
var harnessTestCases = []HarnessTestCase{
testSendOutputs,
testConnectNode,
testActiveHarnesses,
testJoinBlocks,
testJoinMempools, // Depends on results of testJoinBlocks
testGenerateAndSubmitBlock,
testGenerateAndSubmitBlockWithCustomCoinbaseOutputs,
testMemWalletReorg,
testMemWalletLockedOutputs,
}
var mainHarness *Harness
const (
numMatureOutputs = 25
)
func TestMain(m *testing.M) {
var e error
mainHarness, e = New(&chaincfg.SimNetParams, nil, nil)
if e != nil {
fmt.Println("unable to create main harness: ", e)
os.Exit(1)
}
// Initialize the main mining node with a chain of length 125, providing 25 mature coinbases to allow spending from
// for testing purposes.
if e = mainHarness.SetUp(true, numMatureOutputs); E.Chk(e) {
fmt.Println("unable to setup test chain: ", e)
// Even though the harness was not fully setup, it still needs to be torn down to ensure all resources such as
// temp directories are cleaned up. The error is intentionally ignored since this is already an error path and
// nothing else could be done about it anyways.
_ = mainHarness.TearDown()
os.Exit(1)
}
exitCode := m.Run()
// Clean up any active harnesses that are still currently running.
if len(ActiveHarnesses()) > 0 {
if e := TearDownAll(); E.Chk(e) {
fmt.Println("unable to tear down chain: ", e)
os.Exit(1)
}
}
os.Exit(exitCode)
}
func TestHarness(t *testing.T) {
// We should have (numMatureOutputs * 50 DUO) of mature spendable
// outputs.
expectedBalance := amt.Amount(numMatureOutputs * 50 * amt.SatoshiPerBitcoin)
harnessBalance := mainHarness.ConfirmedBalance()
if harnessBalance != expectedBalance {
t.Fatalf(
"expected wallet balance of %v instead have %v",
expectedBalance, harnessBalance,
)
}
// Current tip should be at a height of numMatureOutputs plus the required number of blocks for coinbase maturity.
nodeInfo, e := mainHarness.Node.GetInfo()
if e != nil {
t.Fatalf("unable to execute getinfo on node: %v", e)
}
expectedChainHeight := numMatureOutputs + uint32(mainHarness.ActiveNet.CoinbaseMaturity)
if uint32(nodeInfo.Blocks) != expectedChainHeight {
t.Errorf(
"Chain height is %v, should be %v",
nodeInfo.Blocks, expectedChainHeight,
)
}
for _, testCase := range harnessTestCases {
testCase(mainHarness, t)
}
testTearDownAll(t)
}

package rpctest
import (
"reflect"
"time"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/rpcclient"
)
// JoinType is an enum representing a particular type of "node join". A node
// join is a synchronization tool used to wait until a subset of nodes have a
// consistent state with respect to an attribute.
type JoinType uint8
const (
// Blocks is a JoinType which waits until all nodes share the same block
// height.
Blocks JoinType = iota
// Mempools is a JoinType which blocks until all nodes have an identical mempool.
Mempools
)
// JoinNodes is a synchronization tool used to block until all passed nodes are
// fully synced with respect to an attribute. This function will block for a
// period of time, finally returning once all nodes are synced according to the
// passed JoinType. This function can be used to ensure all active test harnesses
// are at a consistent state before proceeding to an assertion or check within
// rpc tests.
func JoinNodes(nodes []*Harness, joinType JoinType) (e error) {
switch joinType {
case Blocks:
return syncBlocks(nodes)
case Mempools:
return syncMempools(nodes)
}
return nil
}
// syncMempools blocks until all nodes have identical mempools.
func syncMempools(nodes []*Harness) (e error) {
poolsMatch := false
retry:
for !poolsMatch {
firstPool, e := nodes[0].Node.GetRawMempool()
if e != nil {
return e
}
// If all nodes have an identical mempool with respect to the first node,
// then we're done. Otherwise drop back to the top of the loop and retry
// after a short wait period.
for _, node := range nodes[1:] {
nodePool, e := node.Node.GetRawMempool()
if e != nil {
return e
}
if !reflect.DeepEqual(firstPool, nodePool) {
time.Sleep(time.Millisecond * 100)
continue retry
}
}
poolsMatch = true
}
return nil
}
// syncBlocks blocks until all nodes report the same best chain.
func syncBlocks(nodes []*Harness) (e error) {
blocksMatch := false
retry:
for !blocksMatch {
var prevHash *chainhash.Hash
var prevHeight int32
for _, node := range nodes {
blockHash, blockHeight, e := node.Node.GetBestBlock()
if e != nil {
return e
}
if prevHash != nil && (*blockHash != *prevHash ||
blockHeight != prevHeight) {
time.Sleep(time.Millisecond * 100)
continue retry
}
prevHash, prevHeight = blockHash, blockHeight
}
blocksMatch = true
}
return nil
}
// ConnectNode establishes a new peer-to-peer connection between the "from" harness and the "to" harness. The connection
// made is flagged as persistent therefore in the case of disconnects, "from" will attempt to reestablish a connection
// to the "to" harness.
func ConnectNode(from *Harness, to *Harness) (e error) {
peerInfo, e := from.Node.GetPeerInfo()
if e != nil {
return e
}
numPeers := len(peerInfo)
targetAddr := to.node.config.listen
if e = from.Node.AddNode(targetAddr, rpcclient.ANAdd); E.Chk(e) {
return e
}
// Block until a new connection has been established.
peerInfo, e = from.Node.GetPeerInfo()
if e != nil {
return e
}
for len(peerInfo) <= numPeers {
peerInfo, e = from.Node.GetPeerInfo()
if e != nil {
return e
}
}
return nil
}
// TearDownAll tears down all active test harnesses.
func TearDownAll() (e error) {
harnessStateMtx.Lock()
defer harnessStateMtx.Unlock()
for _, harness := range testInstances {
if e := harness.tearDown(); E.Chk(e) {
return e
}
}
return nil
}
// ActiveHarnesses returns a slice of all currently active test harnesses. A test harness is considered "active" if it
// has been created, but not yet torn down.
func ActiveHarnesses() []*Harness {
harnessStateMtx.RLock()
defer harnessStateMtx.RUnlock()
activeNodes := make([]*Harness, 0, len(testInstances))
for _, harness := range testInstances {
activeNodes = append(activeNodes, harness)
}
return activeNodes
}

cmd/node/log.go
package node
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)

package node
import (
"fmt"
"os"
"path/filepath"
"time"
"github.com/p9c/p9/pod/podconfig"
"github.com/btcsuite/winsvc/eventlog"
"github.com/btcsuite/winsvc/mgr"
"github.com/btcsuite/winsvc/svc"
)
const (
// svcName is the name of pod service.
svcName = "podsvc"
// svcDisplayName is the service name that will be shown in the windows
// services list. Note that svcName is the "real" name which is used to
// control the service. This is only for display purposes.
svcDisplayName = "Pod Service"
// svcDesc is the description of the service.
svcDesc = "Downloads and stays synchronized with the bitcoin block " +
"chain and provides chain services to applications."
)
// elog is used to send messages to the Windows event log.
var elog *eventlog.Log
// logServiceStartOfDay logs information about pod when the main server has
// been started to the Windows event log.
func logServiceStartOfDay(srvr *server) {
var message string
message += fmt.Sprintf("Version %s\n", version())
message += fmt.Sprintf("Configuration directory: %s\n", defaultHomeDir)
message += fmt.Sprintf("Configuration file: %s\n", cfg.ConfigFile)
message += fmt.Sprintf("Data directory: %s\n", cfg.DataDir)
elog.Info(1, message)
}
// podService houses the main service handler which handles all service
// updates and launching podMain.
type podService struct{}
// Execute is the main entry point the winsvc package calls when receiving
// information from the Windows service control manager.
// It launches the long-running podMain (which is the real meat of pod),
// handles service change requests,
// and notifies the service control manager of changes.
func (s *podService) Execute(args []string, r <-chan svc.ChangeRequest, changes chan<- svc.Status) (bool, uint32) {
// Service start is pending.
const cmdsAccepted = svc.AcceptStop | svc.AcceptShutdown
changes <- svc.Status{State: svc.StartPending}
// Start podMain in a separate goroutine so the service can start
// quickly. Shutdown (along with a potential error) is reported via
// doneChan. serverChan is notified with the main server instance once
// it is started so it can be gracefully stopped.
doneChan := make(chan error)
serverChan := make(chan *server)
go func() {
e := podMain(serverChan)
doneChan <- e
}()
// Service is now started.
changes <- svc.Status{State: svc.Running, Accepts: cmdsAccepted}
var mainServer *server
loop:
for {
select {
case c := <-r:
switch c.Cmd {
case svc.Interrogate:
changes <- c.CurrentStatus
case svc.Stop, svc.Shutdown:
// Service stop is pending.
// Don't accept any more commands while pending.
changes <- svc.Status{State: svc.StopPending}
// Signal the main function to exit.
shutdownRequestChannel <- struct{}{}
default:
elog.Error(1, fmt.Sprintf(
"Unexpected control request #%d.", c,
),
)
}
case srvr := <-serverChan:
mainServer = srvr
logServiceStartOfDay(mainServer)
case e := <-doneChan:
if e != nil {
elog.Error(1, e.Error())
}
break loop
}
}
// Service is now stopped.
changes <- svc.Status{State: svc.Stopped}
return false, 0
}
// installService attempts to install the pod service.
// Typically this should be done by the msi installer,
// but it is provided here since it can be useful for development.
func installService() (e error) {
// Get the path of the current executable. This is needed because os.
// Args[0] can vary depending on how the application was launched.
// For example, under cmd.exe it will only be the name of the app without
// the path or extension but under mingw it will be the full path
// including the extension.
exePath, e := filepath.Abs(os.Args[0])
if e != nil {
return e
}
if filepath.Ext(exePath) == "" {
exePath += ".exe"
}
// Connect to the windows service manager.
serviceManager, e := mgr.Connect()
if e != nil {
return e
}
defer serviceManager.Disconnect()
// Ensure the service doesn't already exist.
service, e := serviceManager.OpenService(svcName)
if e == nil {
service.Close()
return fmt.Errorf("service %s already exists", svcName)
}
// Install the service.
service, e = serviceManager.CreateService(svcName, exePath, mgr.Config{
DisplayName: svcDisplayName,
Description: svcDesc,
},
)
if e != nil {
return e
}
defer service.Close()
// Support events to the event log using the standard Windows
// EventCreate.exe message file. This allows easy logging of custom
// messages instead of needing to create our own message catalog.
eventlog.Remove(svcName)
eventsSupported := uint32(eventlog.Error | eventlog.Warning | eventlog.Info)
return eventlog.InstallAsEventCreate(svcName, eventsSupported)
}
// removeService attempts to uninstall the pod service.
// Typically this should be done by the msi uninstaller,
// but it is provided here since it can be useful for development.
// Note that the eventlog entry is intentionally not removed since it would
// invalidate any existing event log messages.
func removeService() (e error) {
// Connect to the windows service manager.
serviceManager, e := mgr.Connect()
if e != nil {
return e
}
defer serviceManager.Disconnect()
// Ensure the service exists.
service, e := serviceManager.OpenService(svcName)
if e != nil {
return fmt.Errorf("service %s is not installed", svcName)
}
defer service.Close()
// Remove the service.
return service.Delete()
}
// startService attempts to start the pod service.
func startService() (e error) {
// Connect to the windows service manager.
serviceManager, e := mgr.Connect()
if e != nil {
return e
}
defer serviceManager.Disconnect()
service, e := serviceManager.OpenService(svcName)
if e != nil {
return fmt.Errorf("could not access service: %v", e)
}
defer service.Close()
e = service.Start(os.Args)
if e != nil {
return fmt.Errorf("could not start service: %v", e)
}
return nil
}
// controlService allows commands which change the status of the service.
// It also waits for up to 10 seconds for the service to change to the passed
// state.
func controlService(c svc.Cmd, to svc.State) (e error) {
// Connect to the windows service manager.
serviceManager, e := mgr.Connect()
if e != nil {
return e
}
defer serviceManager.Disconnect()
service, e := serviceManager.OpenService(svcName)
if e != nil {
return fmt.Errorf("could not access service: %v", e)
}
defer service.Close()
status, e := service.Control(c)
if e != nil {
return fmt.Errorf("could not send control=%d: %v", c, e)
}
// Wait for the service to reach the requested state.
timeout := time.Now().Add(10 * time.Second)
for status.State != to {
if timeout.Before(time.Now()) {
return fmt.Errorf("timeout waiting for service to go "+
"to state=%d", to,
)
}
time.Sleep(300 * time.Millisecond)
status, e = service.Query()
if e != nil {
return fmt.Errorf("could not retrieve service "+
"status: %v", e,
)
}
}
return nil
}
// performServiceCommand attempts to run one of the supported service
// commands provided on the command line via the service command flag.
// An appropriate error is returned if an invalid command is specified.
func performServiceCommand(command string) (e error) {
switch command {
case "install":
e = installService()
case "remove":
e = removeService()
case "start":
e = startService()
case "stop":
e = controlService(svc.Stop, svc.Stopped)
default:
e = fmt.Errorf("invalid service command [%s]", command)
}
return e
}
// serviceMain checks whether we're being invoked as a service,
// and if so uses the service control manager to start the long-running
// server.
// A flag is returned to the caller so the application can determine whether
// to exit (when running as a service) or launch in normal interactive mode.
func serviceMain() (bool, error) {
// Don't run as a service if we're running interactively
// (or that can't be determined due to an error).
isInteractive, e := svc.IsAnInteractiveSession()
if e != nil {
return false, e
}
if isInteractive {
return false, nil
}
elog, e = eventlog.Open(svcName)
if e != nil {
return false, e
}
defer elog.Close()
e = svc.Run(svcName, &podService{})
if e != nil {
elog.Error(1, fmt.Sprintf("Service start failed: %v", e))
return true, e
}
return true, nil
}
// Set windows specific functions to real functions.
func init() {
podconfig.runServiceCommand = performServiceCommand
winServiceMain = serviceMain
}

cmd/node/node/_signal.go_
package node
import (
"os"
"os/signal"
"runtime/trace"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/util/cl"
)
// shutdownRequestChannel is used to initiate shutdown from one of the subsystems using the same code paths as when an interrupt signal is received.
var shutdownRequestChannel = make(qu.C)
// interruptSignals defines the default signals to catch in order to do a proper shutdown. This may be modified during init depending on the platform.
var interruptSignals = []os.Signal{os.Interrupt}
// interruptListener listens for OS Signals such as SIGINT (Ctrl+C) and shutdown requests from shutdownRequestChannel. It returns a channel that is closed when either signal is received.
func interruptListener() <-qu.C {
c := make(qu.C)
go func() {
interruptChannel := make(chan os.Signal, 1)
signal.Notify(interruptChannel, interruptSignals...)
// Listen for initial shutdown signal and close the returned channel to notify the caller.
select {
case sig := <-interruptChannel:
log <- cl.Infof{"received signal (%s) - shutting down...", sig}
trace.Stop()
case <-shutdownRequestChannel:
log <- cl.Inf("shutdown requested - shutting down...")
}
close(c)
// Listen for repeated signals and display a message so the user knows the shutdown is in progress and the process is not hung.
for {
select {
case sig := <-interruptChannel:
log <- cl.Infof{"received signal (%s) - already shutting down...", sig}
case <-shutdownRequestChannel:
log <- cl.Inf("shutdown requested - already shutting down...")
}
}
}()
return c
}
// interruptRequested returns true when the channel returned by interruptListener was closed. This simplifies early shutdown slightly since the caller can just use an if statement instead of a select.
func interruptRequested(interrupted <-qu.C) bool {
select {
case <-interrupted:
return true
default:
}
return false
}

// +build darwin dragonfly freebsd linux netbsd openbsd solaris
package node
import (
"os"
"syscall"
)
func init() {
interruptSignals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}

cmd/node/noded.go
/*Package node is a full-node Parallelcoin implementation written in Go.
The default options are sane for most users. This means pod will work 'out of the box' for most users. However, there
are also a wide variety of flags that can be used to control it.
The following section provides a usage overview which enumerates the flags. An interesting point to note is that the
long form of all of these options (except -C/--configfile and -D/--datadir) can be specified in a configuration file
that is automatically parsed when pod starts up. By default, the configuration file is located at ~/.pod/pod.conf on
POSIX-style operating systems and %LOCALAPPDATA%\pod\pod.conf on Windows. The -D (--datadir) flag can be used to
override this location.
NAME:
pod node - start parallelcoin full node
USAGE:
pod node [global options] command [command options] [arguments...]
VERSION:
v0.0.1
COMMANDS:
dropaddrindex drop the address search index
droptxindex drop the transaction search index
dropcfindex drop the committed filter index
GLOBAL OPTIONS:
--help, -h show help
*/
package node
import (
"io"
"net"
"net/http"
"os"
"path/filepath"
"runtime/pprof"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/interrupt"
"github.com/p9c/p9/pkg/apputil"
"github.com/p9c/p9/pkg/chainrpc"
"github.com/p9c/p9/pkg/constant"
"github.com/p9c/p9/pkg/ctrl"
"github.com/p9c/p9/pkg/database"
"github.com/p9c/p9/pkg/database/blockdb"
"github.com/p9c/p9/pkg/indexers"
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/pod/state"
)
// // This enables pprof
// _ "net/http/pprof"
// winServiceMain is only invoked on Windows. It detects when pod is running as a service and reacts accordingly.
var winServiceMain func() (bool, error)
// NodeMain is the real main function for pod.
//
// The optional serverChan parameter is mainly used by the service code to be notified with the server once it is setup
// so it can gracefully stop it when requested from the service control manager.
func NodeMain(cx *state.State) (e error) {
T.Ln("starting up node main")
// cx.WaitGroup.Add(1)
cx.WaitAdd()
// enable http profiling server if requested
if cx.Config.Profile.V() != "" {
D.Ln("profiling requested")
go func() {
listenAddr := net.JoinHostPort("", cx.Config.Profile.V())
I.Ln("profile server listening on", listenAddr)
profileRedirect := http.RedirectHandler("/debug/pprof", http.StatusSeeOther)
http.Handle("/", profileRedirect)
D.Ln("profile server", http.ListenAndServe(listenAddr, nil))
}()
}
// write cpu profile if requested
if cx.Config.CPUProfile.V() != "" && os.Getenv("POD_TRACE") != "on" {
D.Ln("cpu profiling enabled")
var f *os.File
f, e = os.Create(cx.Config.CPUProfile.V())
if e != nil {
E.Ln("unable to create cpu profile:", e)
return
}
e = pprof.StartCPUProfile(f)
if e != nil {
D.Ln("failed to start up cpu profiler:", e)
} else {
defer func() {
if e = f.Close(); E.Chk(e) {
}
}()
defer pprof.StopCPUProfile()
interrupt.AddHandler(
func() {
D.Ln("stopping CPU profiler")
e = f.Close()
if e != nil {
}
pprof.StopCPUProfile()
D.Ln("finished cpu profiling", cx.Config.CPUProfile.V())
},
)
}
}
// perform upgrades to pod as new versions require it
if e = doUpgrades(cx); E.Chk(e) {
return
}
// return now if an interrupt signal was triggered
if interrupt.Requested() {
return nil
}
// load the block database
var db database.DB
db, e = loadBlockDB(cx)
if e != nil {
return
}
closeDb := func() {
// ensure the database is synced and closed on shutdown
T.Ln("gracefully shutting down the database")
func() {
if e = db.Close(); E.Chk(e) {
}
}()
}
defer closeDb()
interrupt.AddHandler(closeDb)
// return now if an interrupt signal was triggered
if interrupt.Requested() {
return nil
}
// drop indexes and exit if requested.
//
// NOTE: The order is important here because dropping the tx index also drops the address index since it relies on
// it
if cx.StateCfg.DropAddrIndex {
W.Ln("dropping address index")
if e = indexers.DropAddrIndex(db, interrupt.ShutdownRequestChan); E.Chk(e) {
return
}
}
if cx.StateCfg.DropTxIndex {
W.Ln("dropping transaction index")
if e = indexers.DropTxIndex(db, interrupt.ShutdownRequestChan); E.Chk(e) {
return
}
}
if cx.StateCfg.DropCfIndex {
W.Ln("dropping cfilter index")
if e = indexers.DropCfIndex(db, interrupt.ShutdownRequestChan); E.Chk(e) {
return
}
}
// return now if an interrupt signal was triggered
if interrupt.Requested() {
return nil
}
mempoolUpdateChan := qu.Ts(1)
mempoolUpdateHook := func() {
mempoolUpdateChan.Signal()
}
// create server and start it
var server *chainrpc.Node
server, e = chainrpc.NewNode(
cx.Config.P2PListeners.S(),
db,
interrupt.ShutdownRequestChan,
state.GetContext(cx),
mempoolUpdateHook,
)
if e != nil {
E.F("unable to start server on %v: %v", cx.Config.P2PListeners.S(), e)
return e
}
server.Start()
cx.RealNode = server
// if len(server.RPCServers) > 0 && *cx.Config.CAPI {
// D.Ln("starting cAPI.....")
// // chainrpc.RunAPI(server.RPCServers[0], cx.NodeKill)
// // D.Ln("propagating rpc server handle (node has started)")
// }
// I.S(server.RPCServers)
if len(server.RPCServers) > 0 {
cx.RPCServer = server.RPCServers[0]
D.Ln("sending back node")
cx.NodeChan <- cx.RPCServer
}
D.Ln("starting controller")
cx.Controller, e = ctrl.New(
cx.Syncing,
cx.Config,
cx.StateCfg,
cx.RealNode,
cx.RPCServer.Cfg.ConnMgr,
mempoolUpdateChan,
uint64(cx.Config.UUID.V()),
cx.KillAll,
cx.RealNode.StartController, cx.RealNode.StopController,
)
go cx.Controller.Run()
cx.Controller.Start()
D.Ln("controller started")
once := true
gracefulShutdown := func() {
if !once {
return
}
if once {
once = false
}
D.Ln("gracefully shutting down the server...")
D.Ln("stopping controller")
cx.Controller.Shutdown()
D.Ln("stopping server")
e := server.Stop()
if e != nil {
W.Ln("failed to stop server", e)
}
server.WaitForShutdown()
I.Ln("server shutdown complete")
log.LogChanDisabled.Store(true)
cx.WaitDone()
cx.KillAll.Q()
cx.NodeKill.Q()
}
D.Ln("adding interrupt handler for node")
interrupt.AddHandler(gracefulShutdown)
// Wait until the interrupt signal is received from an OS signal or shutdown is requested through one of the
// subsystems such as the RPC server.
select {
case <-cx.NodeKill.Wait():
D.Ln("NodeKill")
if !interrupt.Requested() {
interrupt.Request()
}
break
case <-cx.KillAll.Wait():
D.Ln("KillAll")
if !interrupt.Requested() {
interrupt.Request()
}
break
}
gracefulShutdown()
return nil
}
// loadBlockDB loads (or creates when needed) the block database taking into account the selected database backend and
// returns a handle to it. It also contains additional logic such as warning the user if there are multiple databases
// which consume space on the file system and ensuring the regression test database is clean when in regression test mode.
// space on the file system and ensuring the regression test database is clean when in regression test mode.
func loadBlockDB(cx *state.State) (db database.DB, e error) {
// The memdb backend does not have a file path associated with it, so handle it uniquely. We also don't want to
// worry about the multiple database type warnings when running with the memory database.
if cx.Config.DbType.V() == "memdb" {
I.Ln("creating block database in memory")
if db, e = database.Create(cx.Config.DbType.V()); E.Chk(e) {
return nil, e
}
return db, nil
}
warnMultipleDBs(cx)
// The database name is based on the database type.
dbPath := state.BlockDb(cx, cx.Config.DbType.V(), blockdb.NamePrefix)
// The regression test is special in that it needs a clean database for each
// run, so remove it now if it already exists.
e = removeRegressionDB(cx, dbPath)
if e != nil {
D.Ln("failed to remove regression db:", e)
}
I.F("loading block database from '%s'", dbPath)
I.Ln(database.SupportedDrivers())
if db, e = database.Open(cx.Config.DbType.V(), dbPath, cx.ActiveNet.Net); E.Chk(e) {
T.Ln(e) // return the error if it's not because the database doesn't exist
if dbErr, ok := e.(database.DBError); !ok || dbErr.ErrorCode !=
database.ErrDbDoesNotExist {
return nil, e
}
// create the db if it does not exist
e = os.MkdirAll(cx.Config.DataDir.V(), 0700)
if e != nil {
return nil, e
}
db, e = database.Create(cx.Config.DbType.V(), dbPath, cx.ActiveNet.Net)
if e != nil {
return nil, e
}
}
T.Ln("block database loaded")
return db, nil
}
// removeRegressionDB removes the existing regression test database if running
// in regression test mode and it already exists.
func removeRegressionDB(cx *state.State, dbPath string) (e error) {
// don't do anything if not in regression test mode
if !((cx.Config.Network.V())[0] == 'r') {
return nil
}
// remove the old regression test database if it already exists
fi, e := os.Stat(dbPath)
if e == nil {
I.F("removing regression test database from '%s'", dbPath)
if fi.IsDir() {
if e = os.RemoveAll(dbPath); E.Chk(e) {
return e
}
} else {
if e = os.Remove(dbPath); E.Chk(e) {
return e
}
}
}
return nil
}
// warnMultipleDBs shows a warning if multiple block database types are
// detected. This is not a situation most users want. It is handy for
// development however to support multiple side-by-side databases.
func warnMultipleDBs(cx *state.State) {
// This is intentionally not using the known db types which depend on the
// database types compiled into the binary since we want to detect legacy db
// types as well.
dbTypes := []string{"ffldb", "leveldb", "sqlite"}
duplicateDbPaths := make([]string, 0, len(dbTypes)-1)
for _, dbType := range dbTypes {
if dbType == cx.Config.DbType.V() {
continue
}
// store db path as a duplicate db if it exists
dbPath := state.BlockDb(cx, dbType, blockdb.NamePrefix)
if apputil.FileExists(dbPath) {
duplicateDbPaths = append(duplicateDbPaths, dbPath)
}
}
// warn if there are extra databases
if len(duplicateDbPaths) > 0 {
selectedDbPath := state.BlockDb(cx, cx.Config.DbType.V(), blockdb.NamePrefix)
W.F(
"\nThere are multiple block chain databases using different"+
" database types.\nYou probably don't want to waste disk"+
" space by having more than one."+
"\nYour current database is located at [%v]."+
"\nThe additional database is located at %v",
selectedDbPath,
duplicateDbPaths,
)
}
}
// dirEmpty returns whether or not the specified directory path is empty
func dirEmpty(dirPath string) (bool, error) {
f, e := os.Open(dirPath)
if e != nil {
return false, e
}
defer func() {
if e = f.Close(); E.Chk(e) {
}
}()
// Read the names of a max of one entry from the directory. When the directory is empty, an io.EOF error will be
// returned, so allow it.
names, e := f.Readdirnames(1)
if e != nil && e != io.EOF {
return false, e
}
return len(names) == 0, nil
}
// doUpgrades performs upgrades to pod as new versions require it
func doUpgrades(cx *state.State) (e error) {
e = upgradeDBPaths(cx)
if e != nil {
return e
}
return upgradeDataPaths()
}
// oldPodHomeDir returns the OS specific home directory pod used prior to version 0.3.3. This has since been replaced
// with util.AppDataDir but this function is still provided for the automatic upgrade path.
func oldPodHomeDir() string {
// Search for Windows APPDATA first. This won't exist on POSIX OSes
appData := os.Getenv("APPDATA")
if appData != "" {
return filepath.Join(appData, "pod")
}
// Fall back to standard HOME directory that works for most POSIX OSes
home := os.Getenv("HOME")
if home != "" {
return filepath.Join(home, ".pod")
}
// In the worst case, use the current directory
return "."
}
// upgradeDBPathNet moves the database for a specific network from its location prior to pod version 0.2.0 and uses
// heuristics to ascertain the old database type to rename to the new format.
func upgradeDBPathNet(cx *state.State, oldDbPath, netName string) (e error) {
// Prior to version 0.2.0, the database was named the same thing for both sqlite and leveldb. Use heuristics to
// figure out the type of the database and move it to the new path and name introduced with version 0.2.0
// accordingly.
fi, e := os.Stat(oldDbPath)
if e == nil {
oldDbType := "sqlite"
if fi.IsDir() {
oldDbType = "leveldb"
}
// The new database name is based on the database type and resides in a directory named after the network type.
newDbRoot := filepath.Join(filepath.Dir(cx.Config.DataDir.V()), netName)
newDbName := blockdb.NamePrefix + "_" + oldDbType
if oldDbType == "sqlite" {
newDbName = newDbName + ".db"
}
newDbPath := filepath.Join(newDbRoot, newDbName)
// Create the new path if needed
//
e = os.MkdirAll(newDbRoot, 0700)
if e != nil {
return e
}
// Move and rename the old database
//
e = os.Rename(oldDbPath, newDbPath)
if e != nil {
return e
}
}
return nil
}
// upgradeDBPaths moves the databases from their locations prior to pod version 0.2.0 to their new locations
func upgradeDBPaths(cx *state.State) (e error) {
// Prior to version 0.2.0 the databases were in the "db" directory and their names were suffixed by "testnet" and
// "regtest" for their respective networks. Chk for the old database and update it to the new path introduced with
// version 0.2.0 accordingly.
oldDbRoot := filepath.Join(oldPodHomeDir(), "db")
e = upgradeDBPathNet(cx, filepath.Join(oldDbRoot, "pod.db"), "mainnet")
if e != nil {
D.Ln(e)
}
e = upgradeDBPathNet(
cx, filepath.Join(oldDbRoot, "pod_testnet.db"),
"testnet",
)
if e != nil {
D.Ln(e)
}
e = upgradeDBPathNet(
cx, filepath.Join(oldDbRoot, "pod_regtest.db"),
"regtest",
)
if e != nil {
D.Ln(e)
}
// Remove the old db directory
return os.RemoveAll(oldDbRoot)
}
// upgradeDataPaths moves the application data from its location prior to pod version 0.3.3 to its new location.
func upgradeDataPaths() (e error) {
// No need to migrate if the old and new home paths are the same.
oldHomePath := oldPodHomeDir()
newHomePath := constant.DefaultHomeDir
if oldHomePath == newHomePath {
return nil
}
// Only migrate if the old path exists and the new one doesn't
if apputil.FileExists(oldHomePath) && !apputil.FileExists(newHomePath) {
// Create the new path
I.F(
"migrating application home path from '%s' to '%s'",
oldHomePath, newHomePath,
)
e := os.MkdirAll(newHomePath, 0700)
if e != nil {
return e
}
// Move old pod.conf into new location if needed
oldConfPath := filepath.Join(oldHomePath, constant.DefaultConfigFilename)
newConfPath := filepath.Join(newHomePath, constant.DefaultConfigFilename)
if apputil.FileExists(oldConfPath) && !apputil.FileExists(newConfPath) {
e = os.Rename(oldConfPath, newConfPath)
if e != nil {
return e
}
}
// Move old data directory into new location if needed
oldDataPath := filepath.Join(oldHomePath, constant.DefaultDataDirname)
newDataPath := filepath.Join(newHomePath, constant.DefaultDataDirname)
if apputil.FileExists(oldDataPath) && !apputil.FileExists(newDataPath) {
e = os.Rename(oldDataPath, newDataPath)
if e != nil {
return e
}
}
// Remove the old home if it is empty or show a warning if not
ohpEmpty, e := dirEmpty(oldHomePath)
if e != nil {
return e
}
if ohpEmpty {
e := os.Remove(oldHomePath)
if e != nil {
return e
}
} else {
W.F(
"not removing '%s' since it contains files not created by"+
" this application you may want to manually move them or"+
" delete them.", oldHomePath,
)
}
}
return nil
}

cmd/node/parameters/genesisblocks
Parallelcoin mainnet raw genesis block
000009f0fcbad3aac904d3660cfdcf238bf298cfe73adf1d39d14fc5c740ccc7
020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c8dcecc953ffff0f1e810201000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000
testnet raw genesis block
00000e41ecbaa35ef91b0c2c22ed4d85fa12bbc87da2668fe17572695fb30cdf
020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c884eac953ffff0f1e18df1a000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000
regtestnet raw genesis block
69e9b79e220ea183dc2a52c825667e486bba65e2f64d237b578559ab60379181
020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c8d4e5c953ffff7f20010000000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000
-----------------------------------------------------------------------------------
mainnet genesis block data
BLOCK 0
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0xdc, 0xec, 0xc9, 0x53,
0xff, 0xff, 0x0f, 0x1e, 0x81, 0x02, 0x01, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
GENESIS BLOCK HASH
for
0x00, 0x00, 0x09, 0xf0, 0xfc, 0xba, 0xd3, 0xaa,
0xc9, 0x04, 0xd3, 0x66, 0x0c, 0xfd, 0xcf, 0x23,
0x8b, 0xf2, 0x98, 0xcf, 0xe7, 0x3a, 0xdf, 0x1d,
0x39, 0xd1, 0x4f, 0xc5, 0xc7, 0x40, 0xcc, 0xc7,
rev
0xc7, 0xcc, 0x40, 0xc7, 0xc5, 0x4f, 0xd1, 0x39,
0x1d, 0xdf, 0x3a, 0xe7, 0xcf, 0x98, 0xf2, 0x8b,
0x23, 0xcf, 0xfd, 0x0c, 0x66, 0xd3, 0x04, 0xc9,
0xaa, 0xd3, 0xba, 0xfc, 0xf0, 0x09, 0x00, 0x00,
Version 2
rev
0x02000000
for
0x00000002
HashPrevBlock 0000000000000000000000000000000000000000000000000000000000000000
HashMerkleRoot c843eae4658e3a51d2f280c36376ce56dc71a6c70e4b1c5ad2d7a9316f9b9ab7
rev
0xc8, 0x43, 0xea, 0xe4, 0x65, 0x8e, 0x3a, 0x51,
0xd2, 0xf2, 0x80, 0xc3, 0x63, 0x76, 0xce, 0x56,
0xdc, 0x71, 0xa6, 0xc7, 0x0e, 0x4b, 0x1c, 0x5a,
0xd2, 0xd7, 0xa9, 0x31, 0x6f, 0x9b, 0x9a, 0xb7,
for
0xb7, 0x9a, 0x9b, 0x6f, 0x31, 0xa9, 0xd7, 0xd2,
0x5a, 0x1c, 0x4b, 0x0e, 0xc7, 0xa6, 0x71, 0xdc,
0x56, 0xce, 0x76, 0x63, 0xc3, 0x80, 0xf2, 0xd2,
0x51, 0x3a, 0x8e, 0x65, 0xe4, 0xea, 0x43, 0xc8,
Unix timestamp dcecc953
for 1405742300 0x53c9ecdc
rev 3706505555 0xdcecc953
Bits ffff0f1e
for 504365055 0x1e0fffff
rev 4294905630 0xffff0f1e
Nonce 66177
for 66177 0x10281
rev 2164392192 0x81020100
Transaction 0
tx version 1
for
0x04, 0xff, 0xff, 0x00, 0x1d, 0x01, 0x04, 0x32,
0x4e, 0x59, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x20,
0x32, 0x30, 0x31, 0x34, 0x2d, 0x30, 0x37, 0x2d,
0x31, 0x39, 0x20, 0x2d, 0x20, 0x44, 0x65, 0x6c,
0x6c, 0x20, 0x42, 0x65, 0x67, 0x69, 0x6e, 0x73,
0x20, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x69,
0x6e, 0x67, 0x20, 0x42, 0x69, 0x74, 0x63, 0x6f,
0x69, 0x6e,
rev
0x6e, 0x69, 0x6f, 0x63, 0x74, 0x69, 0x42, 0x20,
0x67, 0x6e, 0x69, 0x74, 0x70, 0x65, 0x63, 0x63,
0x41, 0x20, 0x73, 0x6e, 0x69, 0x67, 0x65, 0x42,
0x20, 0x6c, 0x6c, 0x65, 0x44, 0x20, 0x2d, 0x20,
0x39, 0x31, 0x2d, 0x37, 0x30, 0x2d, 0x34, 0x31,
0x30, 0x32, 0x20, 0x73, 0x65, 0x6d, 0x69, 0x54,
0x59, 0x4e, 0x32, 0x04, 0x01, 0x1d, 0x00, 0xff,
0xff, 0x04,
txout
for
0x41, 0x04, 0xe0, 0xd2, 0x71, 0x72, 0x51, 0x0c,
0x68, 0x06, 0x88, 0x97, 0x40, 0xed, 0xaf, 0xe6,
0xe6, 0x3e, 0xb2, 0x3f, 0xca, 0x32, 0x78, 0x6f,
0xcc, 0xfd, 0xb2, 0x82, 0xbb, 0x28, 0x76, 0xa9,
0xf4, 0x3b, 0x22, 0x82, 0x45, 0xdf, 0x05, 0x76,
0x61, 0xff, 0x94, 0x3f, 0x61, 0x50, 0x71, 0x6a,
0x20, 0xea, 0x18, 0x51, 0xe8, 0xa7, 0xe9, 0xf5,
0x4e, 0x62, 0x02, 0x97, 0x66, 0x46, 0x18, 0x43,
0x8d, 0xae, 0xac,
rev
0xac, 0xae, 0x8d, 0x43, 0x18, 0x46, 0x66, 0x97,
0x02, 0x62, 0x4e, 0xf5, 0xe9, 0xa7, 0xe8, 0x51,
0x18, 0xea, 0x20, 0x6a, 0x71, 0x50, 0x61, 0x3f,
0x94, 0xff, 0x61, 0x76, 0x05, 0xdf, 0x45, 0x82,
0x22, 0x3b, 0xf4, 0xa9, 0x76, 0x28, 0xbb, 0x82,
0xb2, 0xfd, 0xcc, 0x6f, 0x78, 0x32, 0xca, 0x3f,
0xb2, 0x3e, 0xe6, 0xe6, 0xaf, 0xed, 0x40, 0x97,
0x88, 0x06, 0x68, 0x0c, 0x51, 0x72, 0x71, 0xd2,
0xe0, 0x04, 0x41,
[04e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438dae OP_CHECKSIG]
abCFzjNoXHYxVQYP1WDBwpxgPCXzfqoxv7 +100000000000
-----------------------------------------------------------------------------------
testnet genesis block data
BLOCK 0
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0x84, 0xea, 0xc9, 0x53,
0xff, 0xff, 0x0f, 0x1e, 0x18, 0xdf, 0x1a, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
GENESIS BLOCK HASH
for
0x00, 0x00, 0x0e, 0x41, 0xec, 0xba, 0xa3, 0x5e,
0xf9, 0x1b, 0x0c, 0x2c, 0x22, 0xed, 0x4d, 0x85,
0xfa, 0x12, 0xbb, 0xc8, 0x7d, 0xa2, 0x66, 0x8f,
0xe1, 0x75, 0x72, 0x69, 0x5f, 0xb3, 0x0c, 0xdf,
rev
0xdf, 0x0c, 0xb3, 0x5f, 0x69, 0x72, 0x75, 0xe1,
0x8f, 0x66, 0xa2, 0x7d, 0xc8, 0xbb, 0x12, 0xfa,
0x85, 0x4d, 0xed, 0x22, 0x2c, 0x0c, 0x1b, 0xf9,
0x5e, 0xa3, 0xba, 0xec, 0x41, 0x0e, 0x00, 0x00,
Version 2
rev
0x02000000
for
0x00000002
HashPrevBlock 0000000000000000000000000000000000000000000000000000000000000000
HashMerkleRoot c843eae4658e3a51d2f280c36376ce56dc71a6c70e4b1c5ad2d7a9316f9b9ab7
rev
0xc8, 0x43, 0xea, 0xe4, 0x65, 0x8e, 0x3a, 0x51,
0xd2, 0xf2, 0x80, 0xc3, 0x63, 0x76, 0xce, 0x56,
0xdc, 0x71, 0xa6, 0xc7, 0x0e, 0x4b, 0x1c, 0x5a,
0xd2, 0xd7, 0xa9, 0x31, 0x6f, 0x9b, 0x9a, 0xb7,
for
0xb7, 0x9a, 0x9b, 0x6f, 0x31, 0xa9, 0xd7, 0xd2,
0x5a, 0x1c, 0x4b, 0x0e, 0xc7, 0xa6, 0x71, 0xdc,
0x56, 0xce, 0x76, 0x63, 0xc3, 0x80, 0xf2, 0xd2,
0x51, 0x3a, 0x8e, 0x65, 0xe4, 0xea, 0x43, 0xc8,
Unix timestamp 84eac953
for 1405741700 0x53c9ea84
rev 2229979475 0x84eac953
Bits ffff0f1e
for 504365055 0x1e0fffff
rev 4294905630 0xffff0f1e
Nonce 1761048
for 1761048 0x1adf18
rev 417274368 0x18df1a00
Transaction 0
tx version 1
for
0x04, 0xff, 0xff, 0x00, 0x1d, 0x01, 0x04, 0x32,
0x4e, 0x59, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x20,
0x32, 0x30, 0x31, 0x34, 0x2d, 0x30, 0x37, 0x2d,
0x31, 0x39, 0x20, 0x2d, 0x20, 0x44, 0x65, 0x6c,
0x6c, 0x20, 0x42, 0x65, 0x67, 0x69, 0x6e, 0x73,
0x20, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x69,
0x6e, 0x67, 0x20, 0x42, 0x69, 0x74, 0x63, 0x6f,
0x69, 0x6e,
rev
0x6e, 0x69, 0x6f, 0x63, 0x74, 0x69, 0x42, 0x20,
0x67, 0x6e, 0x69, 0x74, 0x70, 0x65, 0x63, 0x63,
0x41, 0x20, 0x73, 0x6e, 0x69, 0x67, 0x65, 0x42,
0x20, 0x6c, 0x6c, 0x65, 0x44, 0x20, 0x2d, 0x20,
0x39, 0x31, 0x2d, 0x37, 0x30, 0x2d, 0x34, 0x31,
0x30, 0x32, 0x20, 0x73, 0x65, 0x6d, 0x69, 0x54,
0x59, 0x4e, 0x32, 0x04, 0x01, 0x1d, 0x00, 0xff,
0xff, 0x04,
txout
for
0x41, 0x04, 0xe0, 0xd2, 0x71, 0x72, 0x51, 0x0c,
0x68, 0x06, 0x88, 0x97, 0x40, 0xed, 0xaf, 0xe6,
0xe6, 0x3e, 0xb2, 0x3f, 0xca, 0x32, 0x78, 0x6f,
0xcc, 0xfd, 0xb2, 0x82, 0xbb, 0x28, 0x76, 0xa9,
0xf4, 0x3b, 0x22, 0x82, 0x45, 0xdf, 0x05, 0x76,
0x61, 0xff, 0x94, 0x3f, 0x61, 0x50, 0x71, 0x6a,
0x20, 0xea, 0x18, 0x51, 0xe8, 0xa7, 0xe9, 0xf5,
0x4e, 0x62, 0x02, 0x97, 0x66, 0x46, 0x18, 0x43,
0x8d, 0xae, 0xac,
rev
0xac, 0xae, 0x8d, 0x43, 0x18, 0x46, 0x66, 0x97,
0x02, 0x62, 0x4e, 0xf5, 0xe9, 0xa7, 0xe8, 0x51,
0x18, 0xea, 0x20, 0x6a, 0x71, 0x50, 0x61, 0x3f,
0x94, 0xff, 0x61, 0x76, 0x05, 0xdf, 0x45, 0x82,
0x22, 0x3b, 0xf4, 0xa9, 0x76, 0x28, 0xbb, 0x82,
0xb2, 0xfd, 0xcc, 0x6f, 0x78, 0x32, 0xca, 0x3f,
0xb2, 0x3e, 0xe6, 0xe6, 0xaf, 0xed, 0x40, 0x97,
0x88, 0x06, 0x68, 0x0c, 0x51, 0x72, 0x71, 0xd2,
0xe0, 0x04, 0x41,
[04e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438dae OP_CHECKSIG]
abCFzjNoXHYxVQYP1WDBwpxgPCXzfqoxv7 +100000000000
-----------------------------------------------------------------------------------
regtestnet genesis block data
BLOCK 0
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0xd4, 0xe5, 0xc9, 0x53,
0xff, 0xff, 0x7f, 0x20, 0x01, 0x00, 0x00, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
GENESIS BLOCK HASH
for
0x69, 0xe9, 0xb7, 0x9e, 0x22, 0x0e, 0xa1, 0x83,
0xdc, 0x2a, 0x52, 0xc8, 0x25, 0x66, 0x7e, 0x48,
0x6b, 0xba, 0x65, 0xe2, 0xf6, 0x4d, 0x23, 0x7b,
0x57, 0x85, 0x59, 0xab, 0x60, 0x37, 0x91, 0x81,
rev
0x81, 0x91, 0x37, 0x60, 0xab, 0x59, 0x85, 0x57,
0x7b, 0x23, 0x4d, 0xf6, 0xe2, 0x65, 0xba, 0x6b,
0x48, 0x7e, 0x66, 0x25, 0xc8, 0x52, 0x2a, 0xdc,
0x83, 0xa1, 0x0e, 0x22, 0x9e, 0xb7, 0xe9, 0x69,
Version 2
rev
0x02000000
for
0x00000002
HashPrevBlock 0000000000000000000000000000000000000000000000000000000000000000
HashMerkleRoot c843eae4658e3a51d2f280c36376ce56dc71a6c70e4b1c5ad2d7a9316f9b9ab7
rev
0xc8, 0x43, 0xea, 0xe4, 0x65, 0x8e, 0x3a, 0x51,
0xd2, 0xf2, 0x80, 0xc3, 0x63, 0x76, 0xce, 0x56,
0xdc, 0x71, 0xa6, 0xc7, 0x0e, 0x4b, 0x1c, 0x5a,
0xd2, 0xd7, 0xa9, 0x31, 0x6f, 0x9b, 0x9a, 0xb7,
for
0xb7, 0x9a, 0x9b, 0x6f, 0x31, 0xa9, 0xd7, 0xd2,
0x5a, 0x1c, 0x4b, 0x0e, 0xc7, 0xa6, 0x71, 0xdc,
0x56, 0xce, 0x76, 0x63, 0xc3, 0x80, 0xf2, 0xd2,
0x51, 0x3a, 0x8e, 0x65, 0xe4, 0xea, 0x43, 0xc8,
Unix timestamp d4e5c953
for 1405740500 0x53c9e5d4
rev 3571829075 0xd4e5c953
Bits ffff7f20
for 545259519 0x207fffff
rev 4294934304 0xffff7f20
Nonce 1
for 1 0x0001
rev 16777216 0x1000000
Transaction 0
tx version 1
for
0x04, 0xff, 0xff, 0x00, 0x1d, 0x01, 0x04, 0x32,
0x4e, 0x59, 0x54, 0x69, 0x6d, 0x65, 0x73, 0x20,
0x32, 0x30, 0x31, 0x34, 0x2d, 0x30, 0x37, 0x2d,
0x31, 0x39, 0x20, 0x2d, 0x20, 0x44, 0x65, 0x6c,
0x6c, 0x20, 0x42, 0x65, 0x67, 0x69, 0x6e, 0x73,
0x20, 0x41, 0x63, 0x63, 0x65, 0x70, 0x74, 0x69,
0x6e, 0x67, 0x20, 0x42, 0x69, 0x74, 0x63, 0x6f,
0x69, 0x6e,
rev
0x6e, 0x69, 0x6f, 0x63, 0x74, 0x69, 0x42, 0x20,
0x67, 0x6e, 0x69, 0x74, 0x70, 0x65, 0x63, 0x63,
0x41, 0x20, 0x73, 0x6e, 0x69, 0x67, 0x65, 0x42,
0x20, 0x6c, 0x6c, 0x65, 0x44, 0x20, 0x2d, 0x20,
0x39, 0x31, 0x2d, 0x37, 0x30, 0x2d, 0x34, 0x31,
0x30, 0x32, 0x20, 0x73, 0x65, 0x6d, 0x69, 0x54,
0x59, 0x4e, 0x32, 0x04, 0x01, 0x1d, 0x00, 0xff,
0xff, 0x04,
txout
for
0x41, 0x04, 0xe0, 0xd2, 0x71, 0x72, 0x51, 0x0c,
0x68, 0x06, 0x88, 0x97, 0x40, 0xed, 0xaf, 0xe6,
0xe6, 0x3e, 0xb2, 0x3f, 0xca, 0x32, 0x78, 0x6f,
0xcc, 0xfd, 0xb2, 0x82, 0xbb, 0x28, 0x76, 0xa9,
0xf4, 0x3b, 0x22, 0x82, 0x45, 0xdf, 0x05, 0x76,
0x61, 0xff, 0x94, 0x3f, 0x61, 0x50, 0x71, 0x6a,
0x20, 0xea, 0x18, 0x51, 0xe8, 0xa7, 0xe9, 0xf5,
0x4e, 0x62, 0x02, 0x97, 0x66, 0x46, 0x18, 0x43,
0x8d, 0xae, 0xac,
rev
0xac, 0xae, 0x8d, 0x43, 0x18, 0x46, 0x66, 0x97,
0x02, 0x62, 0x4e, 0xf5, 0xe9, 0xa7, 0xe8, 0x51,
0x18, 0xea, 0x20, 0x6a, 0x71, 0x50, 0x61, 0x3f,
0x94, 0xff, 0x61, 0x76, 0x05, 0xdf, 0x45, 0x82,
0x22, 0x3b, 0xf4, 0xa9, 0x76, 0x28, 0xbb, 0x82,
0xb2, 0xfd, 0xcc, 0x6f, 0x78, 0x32, 0xca, 0x3f,
0xb2, 0x3e, 0xe6, 0xe6, 0xaf, 0xed, 0x40, 0x97,
0x88, 0x06, 0x68, 0x0c, 0x51, 0x72, 0x71, 0xd2,
0xe0, 0x04, 0x41,
[04e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438dae OP_CHECKSIG]
abCFzjNoXHYxVQYP1WDBwpxgPCXzfqoxv7 +100000000000


@@ -0,0 +1,131 @@
package parameters
// network genesis info
var (
MainnetGenesisHash = []byte{
0xc7, 0xcc, 0x40, 0xc7, 0xc5, 0x4f, 0xd1, 0x39,
0x1d, 0xdf, 0x3a, 0xe7, 0xcf, 0x98, 0xf2, 0x8b,
0x23, 0xcf, 0xfd, 0x0c, 0x66, 0xd3, 0x04, 0xc9,
0xaa, 0xd3, 0xba, 0xfc, 0xf0, 0x09, 0x00, 0x00,
}
MainnetGenesisBlock = []byte{
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0xdc, 0xec, 0xc9, 0x53,
0xff, 0xff, 0x0f, 0x1e, 0x81, 0x02, 0x01, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
}
TestnetGenesisHash = []byte{
0xdf, 0x0c, 0xb3, 0x5f, 0x69, 0x72, 0x75, 0xe1,
0x8f, 0x66, 0xa2, 0x7d, 0xc8, 0xbb, 0x12, 0xfa,
0x85, 0x4d, 0xed, 0x22, 0x2c, 0x0c, 0x1b, 0xf9,
0x5e, 0xa3, 0xba, 0xec, 0x41, 0x0e, 0x00, 0x00,
}
TestnetGenesisBlock = []byte{
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0x84, 0xea, 0xc9, 0x53,
0xff, 0xff, 0x0f, 0x1e, 0x18, 0xdf, 0x1a, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
}
RegtestnetGenesisHash = []byte{
0x81, 0x91, 0x37, 0x60, 0xab, 0x59, 0x85, 0x57,
0x7b, 0x23, 0x4d, 0xf6, 0xe2, 0x65, 0xba, 0x6b,
0x48, 0x7e, 0x66, 0x25, 0xc8, 0x52, 0x2a, 0xdc,
0x83, 0xa1, 0x0e, 0x22, 0x9e, 0xb7, 0xe9, 0x69,
}
RegtestnetGenesisBlock = []byte{
0x02, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0xb7, 0x9a, 0x9b, 0x6f,
0x31, 0xa9, 0xd7, 0xd2, 0x5a, 0x1c, 0x4b, 0x0e,
0xc7, 0xa6, 0x71, 0xdc, 0x56, 0xce, 0x76, 0x63,
0xc3, 0x80, 0xf2, 0xd2, 0x51, 0x3a, 0x8e, 0x65,
0xe4, 0xea, 0x43, 0xc8, 0xd4, 0xe5, 0xc9, 0x53,
0xff, 0xff, 0x7f, 0x20, 0x01, 0x00, 0x00, 0x00,
0x01, 0x01, 0x00, 0x00, 0x00, 0x01, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00,
0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0xff, 0xff,
0xff, 0xff, 0x3a, 0x04, 0xff, 0xff, 0x00, 0x1d,
0x01, 0x04, 0x32, 0x4e, 0x59, 0x54, 0x69, 0x6d,
0x65, 0x73, 0x20, 0x32, 0x30, 0x31, 0x34, 0x2d,
0x30, 0x37, 0x2d, 0x31, 0x39, 0x20, 0x2d, 0x20,
0x44, 0x65, 0x6c, 0x6c, 0x20, 0x42, 0x65, 0x67,
0x69, 0x6e, 0x73, 0x20, 0x41, 0x63, 0x63, 0x65,
0x70, 0x74, 0x69, 0x6e, 0x67, 0x20, 0x42, 0x69,
0x74, 0x63, 0x6f, 0x69, 0x6e, 0xff, 0xff, 0xff,
0xff, 0x01, 0x00, 0xe8, 0x76, 0x48, 0x17, 0x00,
0x00, 0x00, 0x43, 0x41, 0x04, 0xe0, 0xd2, 0x71,
0x72, 0x51, 0x0c, 0x68, 0x06, 0x88, 0x97, 0x40,
0xed, 0xaf, 0xe6, 0xe6, 0x3e, 0xb2, 0x3f, 0xca,
0x32, 0x78, 0x6f, 0xcc, 0xfd, 0xb2, 0x82, 0xbb,
0x28, 0x76, 0xa9, 0xf4, 0x3b, 0x22, 0x82, 0x45,
0xdf, 0x05, 0x76, 0x61, 0xff, 0x94, 0x3f, 0x61,
0x50, 0x71, 0x6a, 0x20, 0xea, 0x18, 0x51, 0xe8,
0xa7, 0xe9, 0xf5, 0x4e, 0x62, 0x02, 0x97, 0x66,
0x46, 0x18, 0x43, 0x8d, 0xae, 0xac, 0x00, 0x00,
0x00, 0x00,
}
)


@@ -0,0 +1,57 @@
package parameters
import (
"encoding/hex"
"fmt"
"testing"
)
var (
mainnetGenesisHash, _ = hex.DecodeString(`000009f0fcbad3aac904d3660cfdcf238bf298cfe73adf1d39d14fc5c740ccc7`)
mainnetGenesisBlock, _ = hex.DecodeString(`020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c8dcecc953ffff0f1e810201000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000`)
testnetGenesisHash, _ = hex.DecodeString(`00000e41ecbaa35ef91b0c2c22ed4d85fa12bbc87da2668fe17572695fb30cdf`)
testnetGenesisBlock, _ = hex.DecodeString(`020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c884eac953ffff0f1e18df1a000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000`)
regtestnetGenesisHash, _ = hex.DecodeString(`69e9b79e220ea183dc2a52c825667e486bba65e2f64d237b578559ab60379181`)
regtestnetGenesisBlock, _ = hex.DecodeString(`020000000000000000000000000000000000000000000000000000000000000000000000b79a9b6f31a9d7d25a1c4b0ec7a671dc56ce7663c380f2d2513a8e65e4ea43c8d4e5c953ffff7f20010000000101000000010000000000000000000000000000000000000000000000000000000000000000ffffffff3a04ffff001d0104324e5954696d657320323031342d30372d3139202d2044656c6c20426567696e7320416363657074696e6720426974636f696effffffff0100e8764817000000434104e0d27172510c6806889740edafe6e63eb23fca32786fccfdb282bb2876a9f43b228245df057661ff943f6150716a20ea1851e8a7e9f54e620297664618438daeac00000000`)
)
func TestGenesisToHex(t *testing.T) {
printByteAssignments("mainnetGenesisHash", *rev(mainnetGenesisHash))
printByteAssignments("mainnetGenesisBlock", mainnetGenesisBlock)
printByteAssignments("testnetGenesisHash", *rev(testnetGenesisHash))
printByteAssignments("testnetGenesisBlock", testnetGenesisBlock)
printByteAssignments("regtestnetGenesisHash", *rev(regtestnetGenesisHash))
printByteAssignments("regtestnetGenesisBlock", regtestnetGenesisBlock)
}
func printByteAssignments(name string, in []byte) {
fmt.Print(name, "=[]byte{\n")
printGoHexes(in)
fmt.Print("}\n")
}
func printGoHexes(in []byte) {
fmt.Print("\t")
for i := range in {
if i%8 == 0 && i != 0 {
fmt.Print("\n\t")
}
fmt.Printf("0x%02x, ", in[i])
}
fmt.Println()
}
func rev(in []byte) (out *[]byte) {
o := make([]byte, len(in))
out = &o
for i := range in {
(*out)[len(in)-i-1] = in[i]
}
return
}
// func hx(in []byte) string {
// 	return hex.EncodeToString(in)
// }
//
// func split(in []byte, pos int) (out []byte, piece []byte) {
// 	out = in[pos:]
// 	piece = in[:pos]
// 	return
// }

cmd/wallet/CHANGES Normal file

@@ -0,0 +1,527 @@
============================================================================
User visible changes for btcwallet
A wallet daemon for pod, written in Go
============================================================================
Changes in 0.7.0 (Mon Nov 23 2015)
- New features:
- Wallet will now detect network inactivity and reconnect to the pod
RPC server if the connection was lost (#320)
- Bug fixes:
- Removed data races in the RPC server (#292) and waddrmgr package
(#293)
- Corrected handling of btcutil.AddressPubKey addresses when querying
for a ManagedAddress from the address manager (#313)
- Fixed signmessage and verifymessage algorithm to match the equivalent
algorithms used by Core (#324)
- Notable developer-related changes:
- Added support for AppVeyor continuous integration (#299)
- Take advantage of optimized zeroing from the Go 1.5 release (#286)
- Added IsError function to waddrmgr to check that an error is a
ManagerError and contains a matching error code (#289). Simplified
error handling in the wallet package and RPC server with this function
(#290).
- Switched to using a more space efficient data structure for the
wtxmgr CreditRecord type (#295)
- Incorporated latest updates to the votingpool package (#315)
- Miscellaneous:
- Updated websocket notification handlers to latest API required by
pod (#294)
- Enabled the logging subsystem of the rpcclient package (#328)
- Contributors (alphabetical order):
- Alex Yocom-Piatt
- cjepson
- Dave Collins
- John C. Vernaleo
- Josh Rickmar
- Rune T. Aune
Changes in 0.6.0 (Wed May 27 2015)
- New features:
- Add initial account support (#155):
- Add account names for each account number
- Create initial account with the "default" name
- Create new accounts using the createnewaccount RPC
- All accounts (with the exception of the imported account) may be
renamed using the renameaccount RPC
- RPC requests with an unspecified account that default to the unnamed
account in Bitcoin Core Wallet default to "default", the name of the
initial account
- Several RPCs with account parameters do not work with btcwallet
accounts due to concerns over expectations of API compatibility with
Bitcoin Core Wallet. A new RPC API is being planned to rectify this
(#220).
- Store transactions, transaction history, and spend tracking in the
database (#217, #234)
- A full rescan is required when updating from previous wallet
versions to rebuild the transaction history
- Add utility (cmd/dropwtxmgr) to drop transaction history and force a
rescan (#234)
- Implement the help RPC to return single line usages of all wallet and
pod server requests as well as detailed usage for a single request
- Bug fixes:
- Handle chain reorgs by unconfirming transactions from removed blocks
(#248)
- Rollback all transaction history when none of the saved recently seen
block hashes are known to pod (#234, #281)
- Prevent the situation where the default account was renamed but cannot
be renamed back to "" or "default" by removing the special case naming
policy for the default account (#253)
- Create the initial account address if needed when calling the
getaccountaddress RPC (#238)
- Prevent listsinceblock RPC from including all listtransactions result
objects for all transactions since the genesis block (fix included in
#227)
- Add missing fields to listtransactions and gettransaction RPC results
(#265)
- Remove target confirmations limit on listsinceblock results (#266)
- Add JSON array to report errors creating input signature for
signrawtransaction RPC (#267)
- Use negative fees with listtransactions result types (#272)
- Prevent duplicate wallet lock attempt after timeout if explicitly
locked (#275)
- Use correct RPC server JSON-RPC error code for incorrect passphrases
with a walletpassphrase request (#284)
- Regressions:
- Inserting transactions and marking outputs as controlled by wallet in
the new transaction database is extremely slow compared to the previous
in-memory implementation. Later versions may improve this performance
regression by using write-ahead logging (WAL) and performing more
updates at a time under a single database transaction.
- Notable developer-related changes:
- Relicense all code to the btcsuite developers (#258)
- Replace txstore package with wtxmgr, the walletdb-based transaction
store (#217, #234)
- Add Cursor API to walletdb for forwards and backwards iteration over
a bucket (included in #234)
- Factor out much of main's wallet.go into a wallet package (#213,
#276, #255)
- Convert RPC server and client to btcjson v2 API (#233, #227)
- Help text and single line usages for the help RPC are pregenerated
from descriptions in the internal/rpchelp package and saved as
globals in main. Help text must be regenerated (using `go generate`)
each time the btcjson struct tags change or the help definitions are
modified.
- Add additional features to the votingpool package:
- Implement StartWithdrawal API to begin an Open Transactions
withdrawal (#178)
- Add internal APIs to store withdrawal transactions in the wallet's
transaction database (#221)
- Addresses marked as used after appearing publicly on the blockchain or
in mempool; required for future single-use address support (#207)
- Modified waddrmgr APIs to use ForEach functions to iterate over
address strings and managed addresses to improve scalability (#216)
- Move legacy directory under internal directory to prevent importing
of unmaintained packages (enforced since Go 1.5) (#285)
- Improve test coverage in the waddrmgr and wtxmgr packages (#239, #217)
- Contributors (alphabetical order):
- Dave Collins
- Guilherme Salgado
- Javed Khan
- Josh Rickmar
- Manan Patel
Changes in 0.5.1 (Fri Mar 06 2015)
- New features:
- Add flag (--createtemp) to create a temporary simnet wallet
- Bug fixes:
- Mark newly received transactions confirmed when the wallet is initially
created or opened with no addresses
- Notable developer-related changes:
- Refactor the address manager database upgrade paths for easier future
upgrades
- Private key zeroing functions consolidated into the internal zero package
and optimized
Changes in 0.5.0 (Tue Mar 03 2015)
- New features:
- Add a new address manager package (waddrmgr) to replace the previous
wallet/keystore package:
- BIP0032 hierarchical deterministic keys
- BIP0043/BIP0044 multi-account hierarchy
- Strong focus on security:
- Wallet master encryption keys protected by scrypt PBKDF
- NaCl-based secretbox cryptography (XSalsa20 and Poly1305)
- Mandatory encryption of private keys and P2SH redeeming scripts
- Optional encryption of public data, including extended public keys
and addresses
- Different crypto keys for redeeming scripts to mitigate cryptanalysis
- Hardened against memory scraping through the use of actively clearing
private material from memory when locked
- Different crypto keys used for public, private, and script data
- Ability for different passphrases for public and private data
- Multi-tier scalable key design to allow instant password changes
regardless of the number of addresses stored
- Import WIF keys
- Import pay-to-script-hash scripts for things such as multi-signature
transactions
- Ability to export a watching-only version which does not contain any
private key material
- Programmatically detectable errors, including encapsulation of errors
from packages it relies on
- Address synchronization capabilities
- Add a new namespaced database package (walletdb):
- Key/value store
- Namespace support
- Allows multiple packages to have their own area in the database without
worrying about conflicts
- Read-only and read-write transactions with both manual and managed modes
- Nested buckets
- Supports registration of backend databases
- Comprehensive test coverage
- Replace the createencryptedwallet RPC with a wizard-style prompt
(--create) to create a new walletdb-backed wallet file and import keys
from the old Armory wallet file (if any)
- Transaction creation changes:
- Drop default transaction fee to 0.00001 DUO per kB
- Use standard script flags provided by the txscript package for
transaction creation and sanity checking
- Randomize change output index
- Includes amounts (total spendable, total needed, and fee) in all
insufficient funds errors
- Add support for simnet, the private simulation test network
- Implement the following Bitcoin Core RPCs:
- listreceivedbyaddress (#53)
- lockunspent, listlockunspent (#50, #55)
- getreceivedbyaddress
- listreceivedbyaccount
- Reimplement pod RPCs which return the best block to use the block most
recently processed by wallet to avoid confirmation races:
- getbestblockhash
- getblockcount
- Perform clean shutdown on interrupt or when a stop RPC is received (#69)
- Throttle the number of connected HTTP POST and websocket client
connections (tunable using the rpcmaxclients and rpcmaxwebsockets config
options)
- Provide the ability to disable TLS when connecting to a localhost pod or
serving localhost clients
- Rescan improvements:
- Add a rescan notification for when the rescan has completed and no more
rescan notifications are expected (#99)
- Use the most recent partial sync height from a rescan progress
notification when a rescan is restarted after the pod connection is lost
- Force a rescan if the transaction store cannot be opened (due to a
missing file or if the deserialization failed)
- RPC compatibility improvements:
- Allow the use of the `*` account name to refer to all accounts
- Make the account parameter optional for the getbalance and
listalltransactions requests
- Add iswatchonly field to the validateaddress response result
- Check address equivalence in verifymessage by comparing pubkeys and pubkey
hashes rather than requiring the address being verified to be one
controlled by the wallet and using its private key for verification
- Bug fixes:
- Prevent an out-of-bounds panic when handling a gettransaction RPC
- Prevent a panic on client disconnect (#110)
- Prevent double spending coins when creating multiple transactions at once
by serializing access to the transaction creation logic (#120)
- Mark unconfirmed transaction credits as spent when another unconfirmed
transaction spends one (#91)
- Exclude immature coinbase outputs from listunspent results (#103)
- Fix several data and logic races during sync with pod (#101)
- Avoid a memory issue from incorrect slice usage which caused both
duplicate and missing blocks in the transaction store when middle
inserting transactions from a new block
- Only spend P2PKH outputs when creating sendfrom/sendmany/sendtoaddress
transactions (#89)
- Return the correct UTXO set when fetching all wallet UTXOs by fixing an
incorrect slice append
- Remove a deadlock caused by filling the pod notification channel (#100)
- Avoid a confirmation race by using the most recently processed block in
RPC handlers, rather than using the most recently notified block by pod
- Marshal empty JSON arrays as `[]` instead of the JSON `null` by using
empty, non-nil Go slices
- Flush logs and run all deferred functions before main returns and the
process exits
- Sync temporary transaction store flat file before closing and renaming
- Accept hex strings with an odd number of characters
- Notable developer-related changes:
- Switch from the go.net websocket package to gorilla websockets
- Refactor the RPC server:
- Move several global variables to the RPCServer struct
- Dynamically look up appropriate handlers for the current pod connection
status and wallet sync state
- Begin creating websocket notifications by sending to one of many
notification channels in the RPCServer struct, which are in turn
marshalled and broadcast to each websocket client
- Separate the RPC client code into the chain package:
- Uses rpcclient for a pod websocket RPC client
- Converts all notification callbacks to typed messages sent over channels
- Uses an unbounded queue for waiting notifications
- Import a new voting pool package (votingpool):
- Create and fetch voting pools and series from a walletdb namespace
- Generate deposit addresses utilizing m-of-n multisig P2SH scripts
- Improve transaction creation readability by splitting a monolithic
function into several smaller ones
- Check and handle all errors in some way, or explicitly comment why a
particular error was left unchecked
- Simplify RPC error handling by wrapping specific errors in unique types to
create an appropriate btcjson error before the response is marshalled
- Add a map of unspent outputs (keyed by outpoint) to the transaction store
for quick lookup of any UTXO and access to the full wallet UTXO set
without iterating over many transactions looking for unspent credits
- Modify several data structures and function signatures to reduce the
number of needed allocations and be more cache friendly
- Miscellaneous:
- Rewrite paths relative to the data directory when an alternate data
directory is provided on the command line
- Switch the websocket endpoint to `ws` to match pod
- Remove the getaddressbalance extension RPC to discourage address reuse and
encourage watching for expected payments by using listunspent
- Increase transaction creation performance by moving the sorting of
transaction outputs by their amount out of an inner loop
- Add additional logging to the transaction store:
- Log each transaction added to the store
- Log each previously unconfirmed transaction that is mined
- [debug] Log which previous outputs are marked spent by a newly inserted
debiting transaction
- [debug] Log each transaction that is removed in a rollback
- Only log rollbacks if transactions are reorged out of the old chain
- Save logs to network-specific directories
(e.g. ~/.btcwallet/logs/testnet3) to match pod behavior (#114)
Changes in 0.4.0 (Sun May 25 2014)
- Implement the following standard bitcoin server RPC requests:
- signmessage (https://github.com/conformal/btcwallet/issues/58)
- verifymessage (https://github.com/conformal/btcwallet/issues/61)
- listunspent (https://github.com/conformal/btcwallet/issues/54)
- validateaddress (https://github.com/conformal/btcwallet/issues/60)
- addmultisigaddress (https://github.com/conformal/btcwallet/issues/37)
- createmultisig (https://github.com/conformal/btcwallet/issues/37)
- signrawtransaction (https://github.com/conformal/btcwallet/issues/59)
- Add authenticate extension RPC request to authenticate a websocket
session without requiring the use of the HTTP Authorization header
- Add podusername and podpassword options to allow separate
authentication credentials from wallet clients when authenticating to a
pod websocket RPC server
- Fix RPC response passthrough: JSON unmarshaling and marshaling is now
delayed until necessary and JSON result objects from pod are sent to
clients directly without an extra decode+encode that may change the
representation of large integer values
- Fix several websocket client connection issues:
- Disconnected clients are cleanly removed without hanging on any final
sends
- Set deadline for websocket client sends to prevent hanging on
misbehaving clients or clients with a bad connection
- Fix return result for dumprivkey by always padding the private key bytes
to a length of 32
- Fix rescan for transaction history for imported addresses
(https://github.com/conformal/btcwallet/issues/74)
- Fix listsinceblock request handler to consider the minimum confirmation
parameter (https://github.com/conformal/btcwallet/issues/80)
- Fix several RPC handlers which require an unlocked wallet to check
for an unlocked wallet before continuing
(https://github.com/conformal/btcwallet/issues/65)
- Fix handling for block rewards (coinbase transactions):
- Update listtransactions results to use "generate" category for
coinbase outputs
- Prevent inclusion of immature coinbase outputs for newly created
transactions
- Rewrite the transaction store to handle several transaction
malleability and performance issues
- The new transaction store is written to disk in a different format
than before, and upgrades will require a rescan to rebuild the
transaction history
- Improve rescan:
- Begin rescan with known UTXO set at start height
- Serialize execution of all rescan requests
- Merge waiting rescan jobs so all jobs can be handled with a single
rescan
- Support partially synced addresses in the keystore and incrementally
mark rescan progress. If a rescan is unable to continue (wallet
closes, pod disconnects, etc.), a new rescan can start at the last
synced chain height
- Notify (with an unsolicited notification) websocket clients of pod
connection state
- Improve logging:
- Log reason for disconnecting a websocket client
- Updates for pod websocket API changes
- Stability fixes, internal API changes, general code cleanup, and comment
corrections
Changes in 0.3.0 (Mon Feb 10 2014)
- Use correct hash algorithm for chained addresses (fixes a bug where
address chaining was still deterministic, but forked from Armory and
previous btcwallet implementations)
- Change websocket endpoint to connect to pod 0.6.0-alpha
- Redo server implementation to serialize handling of client requests
- Redo account locking to greatly reduce btcwallet lockups caused by
incorrect mutex usage
- Open all accounts, rather than just the default account, at startup
- Generate new addresses using pubkey chaining if keypool is depleted and
wallet is locked
- Make maximum keypool size a configuration option (keypoolsize)
- Add disallowfree configuration option (default false) to force adding
the minimum fee to all outbound transactions
- Implement the following standard bitcoin server RPC requests:
- getinfo (https://github.com/conformal/btcwallet/issues/63)
- getrawchangeaddress (https://github.com/conformal/btcwallet/issues/41)
- getreceivedbyaccount (https://github.com/conformal/btcwallet/issues/42)
- gettransaction (https://github.com/conformal/btcwallet/issues/44)
- keypoolrefill (https://github.com/conformal/btcwallet/issues/48)
- listsinceblock (https://github.com/conformal/btcwallet/issues/52)
- sendtoaddress (https://github.com/conformal/btcwallet/issues/56)
- Add empty (unimplemented) handlers for the following RPC requests so
requests are not passed down to pod:
- getblocktemplate
- getwork
- stop
- Add RPC extension request, exportwatchingwallet, to export an account
with a watching-only wallet from an account with a hot wallet that
may be used by a separate btcwallet instance
- Require all account wallets to share the same passphrase
- Change walletlock and walletpassphrase RPC requests to lock or unlock
all account wallets
- Allow opening accounts with watching-only wallets
- Return txid for sendfrom RPC requests
(https://github.com/conformal/btcwallet/issues/64)
- Rescan imported private keys in background
(https://github.com/conformal/btcwallet/issues/34)
- Do not import duplicate private keys
(https://github.com/conformal/btcwallet/issues/35)
- Write all three account files for a new account, rather than just
the wallet (https://github.com/conformal/btcwallet/issues/30)
- Create any missing directories before writing autogenerated certificate
pair
- Fix rescanning of a new account's root address
- Fix error in the wallet file serialization causing duplicate address
encryption attempts
- Fix issue calculating eligible transaction inputs caused by a bad
confirmation check
- Fix file locking issue on Windows caused by not closing files before
renaming
- Fix typos in README file
Changes in 0.2.1 (Thu Jan 10 2014)
- Fix a mutex issue which caused btcwallet to lockup on all
RPC requests needing to read or write an account
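Lockups like the one fixed in 0.2.1 typically come from a lock not being released on some return path. The standard Go pattern is to pair `Lock` with a deferred `Unlock`; a minimal sketch with a hypothetical `account` type standing in for wallet account state:

```go
package main

import (
	"fmt"
	"sync"
)

// account is a hypothetical stand-in for wallet account state.
type account struct {
	mu      sync.Mutex
	balance int64
}

// Balance defers the Unlock so the mutex is released on every
// return path, avoiding the kind of lockup fixed in 0.2.1.
func (a *account) Balance() int64 {
	a.mu.Lock()
	defer a.mu.Unlock()
	return a.balance
}

func main() {
	a := &account{balance: 50000}
	fmt.Println(a.Balance())
}
```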
Changes in 0.2.0 (Thu Jan 09 2014)
- Enable mainnet support (disabled by default, use --mainnet to enable)
- Don't hardcode localhost pod connections. Instead, add a --connect
option to specify the hostname or address and port of a local or
remote pod instance
(https://github.com/conformal/btcwallet/issues/1)
- Remove the --serverport option and replace it with --listen. This
option works just like pod's --rpclisten and allows specifying the
interfaces to listen on for RPC connections
- Require TLS and Basic HTTP authentication before wallet can be
controlled over RPC
- Refill keypool if wallet is unlocked and keypool is emptied
- Detect and rollback saved tx/utxo info after pod performs blockchain
reorganizations while btcwallet was disconnected
- Add support for the following standard bitcoin JSON-RPC calls:
- dumpprivkey (https://github.com/conformal/btcwallet/issues/9)
- getaccount
- getaccountaddress
- importprivkey (https://github.com/conformal/btcwallet/issues/2)
- listtransactions (https://github.com/conformal/btcwallet/issues/12)
- Add several extension RPC calls for websocket connections:
- getaddressbalance: get the balance associated with a single address
- getunconfirmedbalance: get total balance for unconfirmed transactions
- listaddresstransactions: list transactions for a single address
(https://github.com/conformal/btcwallet/issues/27)
- listalltransactions: lists all transactions without specifying a range
- Make RPC extensions available only to websocket connections, with the
exception of createencryptedwallet
- Add dummy handlers for unimplemented wallet RPC calls
(https://github.com/conformal/btcwallet/issues/29)
- Add socks5/tor proxy support
- Calculate and add minimum transaction fee to created transactions
- Use OS-specific rename calls to provide atomic file renames which
can replace a currently-existing file
(https://github.com/conformal/btcwallet/issues/20)
- Move account files to a single directory per bitcoin network to
prevent a future scaling issue
(https://github.com/conformal/btcwallet/issues/16)
- Fix several data races and mutex mishandling
- Fix a bug where the RPC server hung on requests requiring pod
when a pod connection was never established
- Fix a bug where creating account files did not create all necessary
directories (https://github.com/conformal/btcwallet/issues/15)
- Fix a bug where '~' did not expand to a home or user directory
(https://github.com/conformal/btcwallet/issues/17)
- Fix a bug where returning account names as strings did not remove
trailing zero bytes
- Fix a bug where help usage was displayed twice using the -h or --help
flag
- Fix sample listening address in sample configuration file
- Update sample configuration file with all available options with
descriptions and defaults for each
Initial Release 0.1.0 (Wed Nov 13 2013)
- Initial release

cmd/wallet/LICENSE
ISC License
Copyright (c) 2018- The Parallelcoin Team
Copyright (c) 2013-2017 The btcsuite developers
Copyright (c) 2015-2016 The Decred developers
Permission to use, copy, modify, and distribute this software for any
purpose with or without fee is hereby granted, provided that the above
copyright notice and this permission notice appear in all copies.
THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.

cmd/wallet/README.md
pod wallet
=========
[![Build Status](https://travis-ci.org/btcsuite/btcwallet.png?branch=master)](https://travis-ci.org/btcsuite/btcwallet)
[![Build status](https://ci.appveyor.com/api/projects/status/88nxvckdj8upqr36/branch/master?svg=true)](https://ci.appveyor.com/project/jrick/btcwallet/branch/master)
btcwallet is a daemon handling bitcoin wallet functionality for a single user.
It acts as both an RPC client to pod and an RPC server for wallet clients and
legacy RPC applications.
Public and private keys are derived using the hierarchical deterministic format
described by
[BIP0032](https://github.com/bitcoin/bips/blob/master/bip-0032.mediawiki).
Unencrypted private keys are not supported and are never written to disk.
btcwallet uses the
`m/44'/<coin type>'/<account>'/<branch>/<address index>`
HD path for all derived addresses, as described by
[BIP0044](https://github.com/bitcoin/bips/blob/master/bip-0044.mediawiki).
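For illustration, the BIP0044 path above can be formatted from its four indices; `bip44Path` is a hypothetical helper for this sketch, not a function btcwallet exports:

```go
package main

import "fmt"

// bip44Path formats a BIP0044 derivation path from its components.
// The apostrophes mark hardened derivation at the purpose, coin
// type, and account levels.
func bip44Path(coinType, account, branch, index uint32) string {
	return fmt.Sprintf("m/44'/%d'/%d'/%d/%d", coinType, account, branch, index)
}

func main() {
	// First external address of the first account for coin type 1.
	fmt.Println(bip44Path(1, 0, 0, 0)) // m/44'/1'/0'/0/0
}
```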
Due to the sensitive nature of public data in a BIP0032 wallet, btcwallet
provides the option of encrypting not just private keys, but public data as
well. This is intended so that a compromised wallet file does not expose
all current and future addresses (public keys) managed by the wallet.
While access to this information would not allow an
attacker to spend or steal coins, it does mean they could track all transactions
involving your addresses and therefore know your exact balance. In a future
release, public data encryption will extend to transactions as well.
btcwallet is not an SPV client and requires connecting to a local or remote pod
instance for asynchronous blockchain queries and notifications over websockets.
Full pod installation instructions can be
found [here](https://github.com/parallelcointeam/parallelcoin). An alternative
SPV mode that is compatible with pod and Bitcoin Core is planned for a future
release.
Wallet clients can use one of two RPC servers:
1. A legacy JSON-RPC server mostly compatible with Bitcoin Core
The JSON-RPC server exists to ease the migration of wallet applications from
Core, but complete compatibility is not guaranteed. Some portions of the
API (and especially accounts) have to work differently due to other design
decisions (mostly due to BIP0044). However, if you find a compatibility issue
and feel that it could be reasonably supported, please report an issue. This
server is enabled by default.
2. An experimental gRPC server
The gRPC server uses a new API built for btcwallet, but the API is not
stabilized and the server is feature gated behind a config option
(`--experimentalrpclisten`). If you don't mind applications breaking due to
API changes, don't want to deal with issues of the legacy API, or need
notifications for changes to the wallet, this is the RPC server to use. The
gRPC server is documented [here](./rpc/documentation/README.md).
## Installation and updating
### Windows - MSIs Available
Install the latest MSIs available here:
https://github.com/p9c/p9/releases
https://github.com/p9c/p9/walletmain/releases
### Windows/Linux/BSD/POSIX - Build from source
Building or updating from source requires the following build dependencies:
- **Go 1.5 or 1.6**
Installation instructions can be found here: http://golang.org/doc/install. It
is recommended to add `$GOPATH/bin` to your `PATH` at this point.
**Note:** If you are using Go 1.5, you must manually enable the vendor
experiment by setting the `GO15VENDOREXPERIMENT` environment variable to
`1`. This step is not required for Go 1.6.
- **Glide**
Glide is used to manage project dependencies and provide reproducible builds.
To install:
`go get -u github.com/Masterminds/glide`
Unfortunately, the use of `glide` prevents a handy tool such as `go get` from
automatically downloading, building, and installing the source in a single
command. Instead, the latest project and dependency sources must be first
obtained manually with `git` and `glide`, and then `go` is used to build and
install the project.
**Getting the source**:
For a first-time installation, the project and dependency sources can be
obtained manually with `git` and `glide` (create directories as needed):
```
git clone https://github.com/p9c/p9/walletmain $GOPATH/src/github.com/p9c/p9/walletmain
cd $GOPATH/src/github.com/p9c/p9/walletmain
glide install
```
To update an existing source tree, pull the latest changes and install the
matching dependencies:
```
cd $GOPATH/src/github.com/p9c/p9/walletmain
git pull
glide install
```
**Building/Installing**:
The `go` tool is used to build or install (to `GOPATH`) the project. Some
example build instructions are provided below (all must run from the `btcwallet`
project directory).
To build and install `btcwallet` and all helper commands (in the `cmd`
directory) to `$GOPATH/bin/`, as well as installing all compiled packages to
`$GOPATH/pkg/` (**use this if you are unsure which command to run**):
```
go install . ./cmd/...
```
To build a `btcwallet` executable and install it to `$GOPATH/bin/`:
```
go install
```
To build a `btcwallet` executable and place it in the current directory:
```
go build
```
## Getting Started
The following instructions detail how to get started with btcwallet connecting
to a localhost pod. Commands should be run in `cmd.exe` or PowerShell on
Windows, or any terminal emulator on *nix.
- Run the following command to start pod:
```
pod -u rpcuser -P rpcpass
```
- Run the following command to create a wallet:
```
btcwallet -u rpcuser -P rpcpass --create
```
- Run the following command to start btcwallet:
```
btcwallet -u rpcuser -P rpcpass
```
If everything appears to be working, it is recommended at this point to copy the
sample pod and btcwallet configurations and update with your RPC username and
password.
PowerShell (Installed from MSI):
```
PS> cp "$env:ProgramFiles\Pod Suite\Pod\sample-pod.conf" $env:LOCALAPPDATA\Pod\pod.conf
PS> cp "$env:ProgramFiles\Pod Suite\Btcwallet\sample-btcwallet.conf" $env:LOCALAPPDATA\Btcwallet\btcwallet.conf
PS> $editor $env:LOCALAPPDATA\Pod\pod.conf
PS> $editor $env:LOCALAPPDATA\Btcwallet\btcwallet.conf
```
PowerShell (Installed from source):
```
PS> cp $env:GOPATH\src\github.com\btcsuite\pod\sample-pod.conf $env:LOCALAPPDATA\Pod\pod.conf
PS> cp $env:GOPATH\src\github.com\btcsuite\btcwallet\sample-btcwallet.conf $env:LOCALAPPDATA\Btcwallet\btcwallet.conf
PS> $editor $env:LOCALAPPDATA\Pod\pod.conf
PS> $editor $env:LOCALAPPDATA\Btcwallet\btcwallet.conf
```
Linux/BSD/POSIX (Installed from source):
```bash
$ cp $GOPATH/src/github.com/p9c/p9/sample-pod.conf ~/.pod/pod.conf
$ cp $GOPATH/src/github.com/p9c/p9/walletmain/sample-btcwallet.conf ~/.btcwallet/btcwallet.conf
$ $EDITOR ~/.pod/pod.conf
$ $EDITOR ~/.btcwallet/btcwallet.conf
```
## Issue Tracker
The [integrated github issue tracker](https://github.com/p9c/p9/walletmain/issues)
is used for this project.
## GPG Verification Key
All official release tags are signed by Conformal so users can ensure the code
has not been tampered with and is coming from the btcsuite developers. To verify
the signature perform the following:
- Download the public key from the Conformal website at
https://opensource.conformal.com/GIT-GPG-KEY-conformal.txt
- Import the public key into your GPG keyring:
```bash
gpg --import GIT-GPG-KEY-conformal.txt
```
- Verify the release tag with the following command where `TAG_NAME` is a
placeholder for the specific tag:
```bash
git tag -v TAG_NAME
```
## License
btcwallet is licensed under the liberal ISC License.

cmd/wallet/_config.go_
package wallet
import (
"time"
"github.com/urfave/cli"
)
// Config is the main configuration for wallet
type Config struct {
// General application behavior
ConfigFile *string `short:"C" long:"configfile" description:"Path to configuration file"`
ShowVersion *bool `short:"V" long:"version" description:"Display version information and exit"`
LogLevel *string
Create *bool `long:"create" description:"Create the wallet if it does not exist"`
CreateTemp *bool `long:"createtemp" description:"Create a temporary simulation wallet (pass=password) in the data directory indicated; must call with --datadir"`
AppDataDir *string `short:"A" long:"appdata" description:"Application data directory for wallet config, databases and logs"`
TestNet3 *bool `long:"testnet" description:"Use the test Bitcoin network (version 3) (default mainnet)"`
SimNet *bool `long:"simnet" description:"Use the simulation test network (default mainnet)"`
NoInitialLoad *bool `long:"noinitialload" description:"Defer wallet creation/opening on startup and enable loading wallets over RPC"`
LogDir *string `long:"logdir" description:"Directory to log output."`
Profile *string `long:"profile" description:"Enable HTTP profiling on given port -- NOTE port must be between 1024 and 65536"`
// GUI *bool `long:"__OLDgui" description:"Launch GUI"`
// Wallet options
WalletPass *string `long:"walletpass" default-mask:"-" description:"The public wallet password -- Only required if the wallet was created with one"`
// RPC client options
RPCConnect *string `short:"c" long:"rpcconnect" description:"Hostname/IP and port of pod RPC server to connect to (default localhost:11048, testnet: localhost:21048, simnet: localhost:41048)"`
CAFile *string `long:"cafile" description:"File containing root certificates to authenticate TLS connections with pod"`
EnableClientTLS *bool `long:"clienttls" description:"Enable TLS for the RPC client"`
PodUsername *string `long:"podusername" description:"Username for pod authentication"`
PodPassword *string `long:"podpassword" default-mask:"-" description:"Password for pod authentication"`
Proxy *string `long:"proxy" description:"Connect via SOCKS5 proxy (eg. 127.0.0.1:9050)"`
ProxyUser *string `long:"proxyuser" description:"Username for proxy server"`
ProxyPass *string `long:"proxypass" default-mask:"-" description:"Password for proxy server"`
// SPV client options
UseSPV *bool `long:"usespv" description:"Enables the experimental use of SPV rather than RPC for chain synchronization"`
AddPeers *cli.StringSlice `short:"a" long:"addpeer" description:"Add a peer to connect with at startup"`
ConnectPeers *cli.StringSlice `long:"connect" description:"Connect only to the specified peers at startup"`
MaxPeers *int `long:"maxpeers" description:"Max number of inbound and outbound peers"`
BanDuration *time.Duration `long:"banduration" description:"How long to ban misbehaving peers. Valid time units are {s, m, h}. Minimum 1 second"`
BanThreshold *int `long:"banthreshold" description:"Maximum allowed ban score before disconnecting and banning misbehaving peers."`
// RPC server options
//
// The legacy server is still enabled by default (and eventually will be replaced with the experimental server) so
// prepare for that change by renaming the struct fields (but not the configuration options).
//
// Usernames can also be used for the consensus RPC client, so they aren't considered legacy.
RPCCert *string `long:"rpccert" description:"File containing the certificate file"`
RPCKey *string `long:"rpckey" description:"File containing the certificate key"`
OneTimeTLSKey *bool `long:"onetimetlskey" description:"Generate a new TLS certpair at startup, but only write the certificate to disk"`
EnableServerTLS *bool `long:"servertls" description:"Enable TLS for the RPC server"`
LegacyRPCListeners *cli.StringSlice `long:"rpclisten" description:"Listen for legacy RPC connections on this interface/port (default port: 11046, testnet: 21046, simnet: 41046)"`
LegacyRPCMaxClients *int `long:"rpcmaxclients" description:"Max number of legacy RPC clients for standard connections"`
LegacyRPCMaxWebsockets *int `long:"rpcmaxwebsockets" description:"Max number of legacy RPC websocket connections"`
Username *string `short:"u" long:"username" description:"Username for legacy RPC and pod authentication (if podusername is unset)"`
Password *string `short:"P" long:"password" default-mask:"-" description:"Password for legacy RPC and pod authentication (if podpassword is unset)"`
// EXPERIMENTAL RPC server options
//
// These options will change (and require changes to config files, etc.) when the new gRPC server is enabled.
ExperimentalRPCListeners *cli.StringSlice `long:"experimentalrpclisten" description:"Listen for RPC connections on this interface/port"`
// Deprecated options
DataDir *string `short:"b" long:"datadir" default-mask:"-" description:"DEPRECATED -- use appdata instead"`
}
// A bunch of constants
const ()
/*
// cleanAndExpandPath expands environment variables and leading ~ in the
// passed path, cleans the result, and returns it.
func cleanAndExpandPath(path string) string {
// NOTE: The os.ExpandEnv doesn't work with Windows cmd.exe-style
// %VARIABLE%, but the variables can still be expanded via POSIX-style
// $VARIABLE.
path = os.ExpandEnv(path)
if !strings.HasPrefix(path, "~") {
return filepath.Clean(path)
}
// Expand initial ~ to the current user's home directory, or ~otheruser to otheruser's home directory. On Windows, both forward and backward slashes can be used.
path = path[1:]
var pathSeparators string
if runtime.GOOS == "windows" {
pathSeparators = string(os.PathSeparator) + "/"
} else {
pathSeparators = string(os.PathSeparator)
}
userName := ""
if i := strings.IndexAny(path, pathSeparators); i != -1 {
userName = path[:i]
path = path[i:]
}
homeDir := ""
var u *user.User
var e error
if userName == "" {
u, e = user.Current()
} else {
u, e = user.Lookup(userName)
}
if e == nil {
homeDir = u.HomeDir
}
// Fallback to CWD if user lookup fails or user has no home directory.
if homeDir == "" {
homeDir = "."
}
return filepath.Join(homeDir, path)
}
// createDefaultConfig creates a basic config file at the given destination path.
// For this it tries to read the config file for the RPC server (either pod or
// sac), and extract the RPC user and password from it.
func createDefaultConfigFile(destinationPath, serverConfigPath,
serverDataDir, walletDataDir string) (e error) {
// fmt.Println("server config path", serverConfigPath)
// Read the RPC server config
serverConfigFile, e := os.Open(serverConfigPath)
if e != nil {
return e
}
defer serverConfigFile.Close()
content, e := ioutil.ReadAll(serverConfigFile)
if e != nil {
return e
}
// content := []byte(samplePodCtlConf)
// Extract the rpcuser
rpcUserRegexp, e := regexp.Compile(`(?m)^\s*rpcuser=([^\s]+)`)
if e != nil {
return e
}
userSubmatches := rpcUserRegexp.FindSubmatch(content)
if userSubmatches == nil {
// No user found, nothing to do
return nil
}
// Extract the rpcpass
rpcPassRegexp, e := regexp.Compile(`(?m)^\s*rpcpass=([^\s]+)`)
if e != nil {
return e
}
passSubmatches := rpcPassRegexp.FindSubmatch(content)
if passSubmatches == nil {
// No password found, nothing to do
return nil
}
// Extract the TLS
TLSRegexp, e := regexp.Compile(`(?m)^\s*tls=(0|1)(?:\s|$)`)
if e != nil {
return e
}
TLSSubmatches := TLSRegexp.FindSubmatch(content)
// Create the destination directory if it does not exist
e = os.MkdirAll(filepath.Dir(destinationPath), 0700)
if e != nil {
return e
}
// fmt.Println("config path", destinationPath)
// Create the destination file and write the rpcuser and rpcpass to it
dest, e := os.OpenFile(destinationPath,
os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
if e != nil {
return e
}
defer dest.Close()
destString := fmt.Sprintf("username=%s\npassword=%s\n",
string(userSubmatches[1]), string(passSubmatches[1]))
if TLSSubmatches != nil {
fmt.Println("TLS is enabled but more than likely the certificates will" +
" fail verification because of the CA." +
" Currently there is no adequate tool for this, but there will be soon.")
destString += fmt.Sprintf("clienttls=%s\n", TLSSubmatches[1])
}
output := ";;; Defaults created from local pod/sac configuration:\n" + destString + "\n" + string(sampleModConf)
_, e = dest.WriteString(output)
return e
}
func copy(src, dst string) (int64, error) {
// fmt.Println(src, dst)
sourceFileStat, e := os.Stat(src)
if e != nil {
return 0, e
}
if !sourceFileStat.Mode().IsRegular() {
return 0, fmt.Errorf("%s is not a regular file", src)
}
source, e := os.Open(src)
if e != nil {
return 0, e
}
defer source.Close()
destination, e := os.Create(dst)
if e != nil {
return 0, e
}
defer destination.Close()
nBytes, e := io.Copy(destination, source)
return nBytes, e
}
// supportedSubsystems returns a sorted slice of the supported subsystems for
// logging purposes.
func supportedSubsystems() []string {
// Convert the subsystemLoggers map keys to a slice.
subsystems := make([]string, 0, len(subsystemLoggers))
for subsysID := range subsystemLoggers {
subsystems = append(subsystems, subsysID)
}
// Sort the subsystems for stable display.
sort.Strings(subsystems)
return subsystems
}
// parseAndSetDebugLevels attempts to parse the specified debug level and set
// the levels accordingly. An appropriate error is returned if anything is
// invalid.
func parseAndSetDebugLevels(debugLevel string) (e error) {
// When the specified string doesn't have any delimiters, treat it as
// the log level for all subsystems.
if !strings.Contains(debugLevel, ",") && !strings.Contains(debugLevel, "=") {
// Validate debug log level.
if !validLogLevel(debugLevel) {
str := "The specified debug level [%v] is invalid"
return fmt.Errorf(str, debugLevel)
}
// Change the logging level for all subsystems.
setLogLevels(debugLevel)
return nil
}
// Split the specified string into subsystem/level pairs while detecting
// issues and update the log levels accordingly.
for _, logLevelPair := range strings.Split(debugLevel, ",") {
if !strings.Contains(logLevelPair, "=") {
str := "The specified debug level contains an invalid " +
"subsystem/level pair [%v]"
return fmt.Errorf(str, logLevelPair)
}
// Extract the specified subsystem and log level.
fields := strings.Split(logLevelPair, "=")
subsysID, logLevel := fields[0], fields[1]
// Validate subsystem.
if _, exists := subsystemLoggers[subsysID]; !exists {
str := "The specified subsystem [%v] is invalid -- " +
"supported subsystems %v"
return fmt.Errorf(str, subsysID, supportedSubsystems())
}
// Validate log level.
if !validLogLevel(logLevel) {
str := "The specified debug level [%v] is invalid"
return fmt.Errorf(str, logLevel)
}
setLogLevel(subsysID, logLevel)
}
return nil
}
// loadConfig initializes and parses the config using a config file and command
// line options.
//
// The configuration proceeds as follows:
// 1) Start with a default config with sane settings
// 2) Pre-parse the command line to check for an alternative config file
// 3) Load configuration file overwriting defaults with any specified options
// 4) Parse CLI options and overwrite/add any specified options
//
// The above results in btcwallet functioning properly without any config
// settings while still allowing the user to override settings with config files
// and command line options. Command line options always take precedence.
func loadConfig(cfg *Config) (*Config, []string, error) {
cfg = &Config{
ConfigFile: DefaultConfigFile,
AppDataDir: DefaultAppDataDir,
LogDir: DefaultLogDir,
WalletPass: wallet.InsecurePubPassphrase,
CAFile: "",
RPCKey: DefaultRPCKeyFile,
RPCCert: DefaultRPCCertFile,
WalletRPCMaxClients: DefaultRPCMaxClients,
WalletRPCMaxWebsockets: DefaultRPCMaxWebsockets,
DataDir: DefaultAppDataDir,
// AddPeers: []string{},
// ConnectPeers: []string{},
}
// Pre-parse the command line options to see if an alternative config
// file or the version flag was specified.
preCfg := *cfg
preParser := flags.NewParser(&preCfg, flags.Default)
_, e := preParser.Parse()
if e != nil {
if e, ok := e.(*flags.Error); !ok || e.Type != flags.ErrHelp {
preParser.WriteHelp(os.Stderr)
}
return nil, nil, e
}
// Show the version and exit if the version flag was specified.
funcName := "loadConfig"
appName := filepath.Base(os.Args[0])
appName = strings.TrimSuffix(appName, filepath.Ext(appName))
usageMessage := fmt.Sprintf("Use %s -h to show usage", appName)
if preCfg.ShowVersion {
fmt.Println(appName, "version", version())
os.Exit(0)
}
// Load additional config from file.
var configFileError error
parser := flags.NewParser(cfg, flags.Default)
configFilePath := preCfg.ConfigFile.value
if preCfg.ConfigFile.ExplicitlySet() {
configFilePath = cleanAndExpandPath(configFilePath)
} else {
appDataDir := preCfg.AppDataDir.value
if !preCfg.AppDataDir.ExplicitlySet() && preCfg.DataDir.ExplicitlySet() {
appDataDir = cleanAndExpandPath(preCfg.DataDir.value)
}
if appDataDir != DefaultAppDataDir {
configFilePath = filepath.Join(appDataDir, DefaultConfigFilename)
}
}
e = flags.NewIniParser(parser).ParseFile(configFilePath)
if e != nil {
if _, ok := e.(*os.PathError); !ok {
fmt.Fprintln(os.Stderr, e)
parser.WriteHelp(os.Stderr)
return nil, nil, e
}
configFileError = e
}
// Parse command line options again to ensure they take precedence.
remainingArgs, e := parser.Parse()
if e != nil {
if e, ok := e.(*flags.Error); !ok || e.Type != flags.ErrHelp {
parser.WriteHelp(os.Stderr)
}
return nil, nil, e
}
// Chk deprecated aliases. The new options receive priority when both
// are changed from the default.
if cfg.DataDir.ExplicitlySet() {
fmt.Fprintln(os.Stderr, "datadir opt has been replaced by "+
"appdata -- please update your config")
if !cfg.AppDataDir.ExplicitlySet() {
cfg.AppDataDir.value = cfg.DataDir.value
}
}
// If an alternate data directory was specified, and paths with defaults
// relative to the data dir are unchanged, modify each path to be
// relative to the new data dir.
if cfg.AppDataDir.ExplicitlySet() {
cfg.AppDataDir.value = cleanAndExpandPath(cfg.AppDataDir.value)
if !cfg.RPCKey.ExplicitlySet() {
cfg.RPCKey.value = filepath.Join(cfg.AppDataDir.value, "rpc.key")
}
if !cfg.RPCCert.ExplicitlySet() {
cfg.RPCCert.value = filepath.Join(cfg.AppDataDir.value, "rpc.cert")
}
}
if _, e = os.Stat(cfg.DataDir.value); os.IsNotExist(e) {
// Create the destination directory if it does not exists
e = os.MkdirAll(cfg.DataDir.value, 0700)
if e != nil {
fmt.Println("ERROR", e)
return nil, nil, e
}
}
var generatedRPCPass, generatedRPCUser string
if _, e = os.Stat(cfg.ConfigFile.value); os.IsNotExist(e) {
// If we can find a pod.conf in the standard location, copy the
// rpcuser, rpcpassword and TLS settings from it.
c := cleanAndExpandPath("~/.pod/pod.conf")
// fmt.Println("server config path:", c)
// _, e = os.Stat(c)
// fmt.Println(e)
// fmt.Println(os.IsNotExist(err))
if _, e = os.Stat(c); e == nil {
fmt.Println("Creating config from pod config")
if e = createDefaultConfigFile(cfg.ConfigFile.value, c, cleanAndExpandPath("~/.pod"),
cfg.AppDataDir.value); E.Chk(e) {
return nil, nil, e
}
} else {
var bb bytes.Buffer
bb.Write(sampleModConf)
fmt.Println("Writing config file:", cfg.ConfigFile.value)
dest, e := os.OpenFile(cfg.ConfigFile.value,
os.O_RDWR|os.O_CREATE|os.O_TRUNC, 0600)
if e != nil {
fmt.Println("ERROR", e)
return nil, nil, e
}
defer dest.Close()
// We generate a random user and password
randomBytes := make([]byte, 20)
_, e = rand.Read(randomBytes)
if e != nil {
return nil, nil, e
}
generatedRPCUser = base64.StdEncoding.EncodeToString(randomBytes)
_, e = rand.Read(randomBytes)
if e != nil {
return nil, nil, e
}
generatedRPCPass = base64.StdEncoding.EncodeToString(randomBytes)
// We copy every line from the sample config file to the destination,
// only replacing the two lines for rpcuser and rpcpass
//
var line string
reader := bufio.NewReader(&bb)
for e != io.EOF {
line, e = reader.ReadString('\n')
if e != nil && e != io.EOF {
return nil, nil, e
}
if !strings.Contains(line, "podusername=") && !strings.Contains(line, "podpassword=") {
if strings.Contains(line, "username=") {
line = "username=" + generatedRPCUser + "\n"
} else if strings.Contains(line, "password=") {
line = "password=" + generatedRPCPass + "\n"
}
}
if _, e = dest.WriteString(line); E.Chk(e) {
return nil, nil, e
}
}
}
}
// Choose the active network netparams based on the selected network.
// Multiple networks can't be selected simultaneously.
numNets := 0
if cfg.TestNet3 {
activeNet = &chaincfg.TestNet3Params
numNets++
}
if cfg.SimNet {
activeNet = &chaincfg.SimNetParams
numNets++
}
if numNets > 1 {
str := "%s: The testnet and simnet netparams can't be used " +
"together -- choose one"
e := fmt.Errorf(str, "loadConfig")
fmt.Fprintln(os.Stderr, e)
parser.WriteHelp(os.Stderr)
return nil, nil, e
}
// Append the network type to the log directory so it is "namespaced"
// per network.
cfg.LogDir = cleanAndExpandPath(cfg.LogDir)
cfg.LogDir = filepath.Join(cfg.LogDir, activeNet.Params.Name)
// Special show command to list supported subsystems and exit.
if cfg.DebugLevel == "show" {
fmt.Println("Supported subsystems", supportedSubsystems())
os.Exit(0)
}
// Initialize log rotation. After log rotation has been initialized, the
// logger variables may be used.
initLogRotator(filepath.Join(cfg.LogDir, DefaultLogFilename))
// Parse, validate, and set debug log level(s).
if e := parseAndSetDebugLevels(cfg.DebugLevel); E.Chk(e) {
e := fmt.Errorf("%s: %v", "loadConfig", e.Error())
fmt.Fprintln(os.Stderr, e)
parser.WriteHelp(os.Stderr)
return nil, nil, e
}
// Exit if you try to use a simulation wallet with a standard
// data directory.
if !(cfg.AppDataDir.ExplicitlySet() || cfg.DataDir.ExplicitlySet()) && cfg.CreateTemp {
fmt.Fprintln(os.Stderr, "Tried to create a temporary simulation "+
"wallet, but failed to specify data directory!")
os.Exit(0)
}
// Exit if you try to use a simulation wallet on anything other than
// simnet or testnet3.
if !cfg.SimNet && cfg.CreateTemp {
fmt.Fprintln(os.Stderr, "Tried to create a temporary simulation "+
"wallet for network other than simnet!")
os.Exit(0)
}
// Ensure the wallet exists or create it when the create flag is set.
netDir := NetworkDir(cfg.AppDataDir.value, activeNet.Params)
dbPath := filepath.Join(netDir, WalletDbName)
if cfg.CreateTemp && cfg.Create {
e := fmt.Errorf("The flags --create and --createtemp can not " +
"be specified together. Use --help for more information.")
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
dbFileExists, e := cfgutil.FileExists(dbPath)
if e != nil {
E.Ln(e)
return nil, nil, e
}
if cfg.CreateTemp {
tempWalletExists := false
if dbFileExists {
str := "The wallet already exists. Loading this wallet instead."
fmt.Fprintln(os.Stdout, str)
tempWalletExists = true
}
// Ensure the data directory for the network exists.
if e := checkCreateDir(netDir); E.Chk(e) {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
if !tempWalletExists {
// Perform the initial wallet creation wizard.
if e := createSimulationWallet(cfg); E.Chk(e) {
fmt.Fprintln(os.Stderr, "Unable to create wallet:", e)
return nil, nil, e
}
}
} else if cfg.Create {
// Error if the create flag is set and the wallet already
// exists.
if dbFileExists {
e := fmt.Errorf("The wallet database file `%v` "+
"already exists.", dbPath)
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
// Ensure the data directory for the network exists.
if e := checkCreateDir(netDir); E.Chk(e) {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
// Perform the initial wallet creation wizard.
if e := createWallet(cfg); E.Chk(e) {
fmt.Fprintln(os.Stderr, "Unable to create wallet:", e)
return nil, nil, e
}
// Created successfully, so exit now with success.
os.Exit(0)
} else if !dbFileExists && !cfg.NoInitialLoad {
keystorePath := filepath.Join(netDir, keystore.Filename)
keystoreExists, e := cfgutil.FileExists(keystorePath)
if e != nil {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
if !keystoreExists {
// e = fmt.Errorf("The wallet does not exist. Run with the " +
// "--create opt to initialize and create it...")
// Ensure the data directory for the network exists.
fmt.Println("Existing wallet not found in", netDir)
if e := checkCreateDir(netDir); E.Chk(e) {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
// Perform the initial wallet creation wizard.
if e := createWallet(cfg); E.Chk(e) {
fmt.Fprintln(os.Stderr, "Unable to create wallet:", e)
return nil, nil, e
}
// Created successfully, so exit now with success.
os.Exit(0)
} else {
e = fmt.Errorf("The wallet is in legacy format. Run with the " +
"--create opt to import it.")
}
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
// localhostListeners := map[string]struct{}{
// "localhost": {},
// "127.0.0.1": {},
// "::1": {},
// }
// if cfg.UseSPV {
// sac.MaxPeers = cfg.MaxPeers
// sac.BanDuration = cfg.BanDuration
// sac.BanThreshold = cfg.BanThreshold
// } else {
if cfg.RPCConnect == "" {
cfg.RPCConnect = net.JoinHostPort("localhost", activeNet.RPCClientPort)
}
// Add default port to connect flag if missing.
cfg.RPCConnect, e = cfgutil.NormalizeAddress(cfg.RPCConnect,
activeNet.RPCClientPort)
if e != nil {
fmt.Fprintf(os.Stderr,
"Invalid rpcconnect network address: %v\n", e)
return nil, nil, e
}
// RPCHost, _, e = net.SplitHostPort(cfg.RPCConnect)
// if e != nil {
// return nil, nil, e
// }
if cfg.EnableClientTLS {
// if _, ok := localhostListeners[RPCHost]; !ok {
// str := "%s: the --noclienttls opt may not be used " +
// "when connecting RPC to non localhost " +
// "addresses: %s"
// e := fmt.Errorf(str, funcName, cfg.RPCConnect)
// fmt.Fprintln(os.Stderr, e)
// fmt.Fprintln(os.Stderr, usageMessage)
// return nil, nil, e
// }
// } else {
// If CAFile is unset, choose either the copy or local pod cert.
if !cfg.CAFile.ExplicitlySet() {
cfg.CAFile.value = filepath.Join(cfg.AppDataDir.value, DefaultCAFilename)
// If the CA copy does not exist, check if we're connecting to
// a local pod and switch to its RPC cert if it exists.
certExists, e := cfgutil.FileExists(cfg.CAFile.value)
if e != nil {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
if !certExists {
// if _, ok := localhostListeners[RPCHost]; ok {
podCertExists, e := cfgutil.FileExists(
DefaultCAFile)
if e != nil {
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
if podCertExists {
cfg.CAFile.value = DefaultCAFile
}
// }
}
}
}
// }
// Only set default RPC listeners when there are no listeners set for
// the experimental RPC server. This is required to prevent the old RPC
// server from sharing listen addresses, since it is impossible to
// remove defaults from go-flags slice options without assigning
// specific behavior to a particular string.
if len(cfg.ExperimentalRPCListeners) == 0 && len(cfg.WalletRPCListeners) == 0 {
addrs, e := net.LookupHost("localhost")
if e != nil {
return nil, nil, e
}
cfg.WalletRPCListeners = make([]string, 0, len(addrs))
for _, addr := range addrs {
addr = net.JoinHostPort(addr, activeNet.WalletRPCServerPort)
cfg.WalletRPCListeners = append(cfg.WalletRPCListeners, addr)
}
}
// Add default port to all rpc listener addresses if needed and remove
// duplicate addresses.
cfg.WalletRPCListeners, e = cfgutil.NormalizeAddresses(
cfg.WalletRPCListeners, activeNet.WalletRPCServerPort)
if e != nil {
fmt.Fprintf(os.Stderr,
"Invalid network address in legacy RPC listeners: %v\n", e)
return nil, nil, e
}
cfg.ExperimentalRPCListeners, e = cfgutil.NormalizeAddresses(
cfg.ExperimentalRPCListeners, activeNet.WalletRPCServerPort)
if e != nil {
fmt.Fprintf(os.Stderr,
"Invalid network address in RPC listeners: %v\n", e)
return nil, nil, e
}
// Both RPC servers may not listen on the same interface/port.
if len(cfg.WalletRPCListeners) > 0 && len(cfg.ExperimentalRPCListeners) > 0 {
seenAddresses := make(map[string]struct{}, len(cfg.WalletRPCListeners))
for _, addr := range cfg.WalletRPCListeners {
seenAddresses[addr] = struct{}{}
}
for _, addr := range cfg.ExperimentalRPCListeners {
_, seen := seenAddresses[addr]
if seen {
e := fmt.Errorf("Address `%s` may not be "+
"used as a listener address for both "+
"RPC servers", addr)
fmt.Fprintln(os.Stderr, e)
return nil, nil, e
}
}
}
// Only allow server TLS to be disabled if the RPC server is bound to
// localhost addresses.
if !cfg.EnableServerTLS {
allListeners := append(cfg.WalletRPCListeners,
cfg.ExperimentalRPCListeners...)
for _, addr := range allListeners {
if _, _, e = net.SplitHostPort(addr); e != nil {
str := "%s: RPC listen interface '%s' is " +
"invalid: %v"
e = fmt.Errorf(str, funcName, addr, e)
fmt.Fprintln(os.Stderr, e)
fmt.Fprintln(os.Stderr, usageMessage)
return nil, nil, e
}
// if _, ok := localhostListeners[host]; !ok {
// str := "%s: the --noservertls opt may not be used " +
// "when binding RPC to non localhost " +
// "addresses: %s"
// e := fmt.Errorf(str, funcName, addr)
// fmt.Fprintln(os.Stderr, e)
// fmt.Fprintln(os.Stderr, usageMessage)
// return nil, nil, e
// }
}
}
// Expand environment variable and leading ~ for filepaths.
cfg.CAFile.value = cleanAndExpandPath(cfg.CAFile.value)
cfg.RPCCert.value = cleanAndExpandPath(cfg.RPCCert.value)
cfg.RPCKey.value = cleanAndExpandPath(cfg.RPCKey.value)
// If the pod username or password are unset, use the same auth as for
// the client. The two settings were previously shared for pod and
// client auth, so this avoids breaking backwards compatibility while
// allowing users to use different auth settings for pod and wallet.
if cfg.PodUsername == "" {
cfg.PodUsername = cfg.Username
}
if cfg.PodPassword == "" {
cfg.PodPassword = cfg.Password
}
// Warn about missing config file after the final command line parse
// succeeds. This prevents the warning on help messages and invalid
// options.
if configFileError != nil {
W.F("%v", configFileError)
}
return cfg, nil, nil
}
// validLogLevel returns whether or not logLevel is a valid debug log level.
func validLogLevel(logLevel string) bool {
switch logLevel {
case "trace", "debug", "info", "warn", "error", "critical":
return true
}
return false
}
*/
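The debug spec accepted by parseAndSetDebugLevels above is either a bare level ("debug") or comma-separated subsystem=level pairs ("RPCS=debug,WLLT=trace"). A minimal standalone sketch of that grammar follows; the subsystem IDs in the example are made up for illustration and are not the wallet's real ones.

```go
package main

import (
	"fmt"
	"strings"
)

// validLevels mirrors the levels accepted by validLogLevel.
var validLevels = map[string]bool{
	"trace": true, "debug": true, "info": true,
	"warn": true, "error": true, "critical": true,
}

// parseDebugSpec parses either a bare level ("debug") or a list of
// subsystem=level pairs ("RPCS=debug,WLLT=trace") into a map. The empty
// key "" means "all subsystems".
func parseDebugSpec(spec string) (levels map[string]string, e error) {
	levels = make(map[string]string)
	if !strings.Contains(spec, ",") && !strings.Contains(spec, "=") {
		if !validLevels[spec] {
			return nil, fmt.Errorf("invalid debug level [%v]", spec)
		}
		levels[""] = spec
		return levels, nil
	}
	for _, pair := range strings.Split(spec, ",") {
		fields := strings.SplitN(pair, "=", 2)
		if len(fields) != 2 {
			return nil, fmt.Errorf("invalid subsystem/level pair [%v]", pair)
		}
		if !validLevels[fields[1]] {
			return nil, fmt.Errorf("invalid debug level [%v]", fields[1])
		}
		levels[fields[0]] = fields[1]
	}
	return levels, nil
}

func main() {
	m, e := parseDebugSpec("RPCS=debug,WLLT=trace")
	fmt.Println(m["RPCS"], m["WLLT"], e) // debug trace <nil>
}
```

Unlike the real function, this sketch returns the parsed map instead of mutating logger state, which makes it trivially testable.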
cmd/wallet/_signal.go Executable file
package wallet
import (
"os"
"os/signal"
cl "github.com/p9c/p9/pkg/util/cl"
// assumed location of the qu channel package used by qu.C below
"github.com/p9c/p9/pkg/qu"
)
// interruptChannel is used to receive SIGINT (Ctrl+C) signals.
var interruptChannel chan os.Signal
// addHandlerChannel is used to add an interrupt handler to the list of handlers
// to be invoked on SIGINT (Ctrl+C) signals.
var addHandlerChannel = make(chan func())
// interruptHandlersDone is closed after all interrupt handlers run the first
// time an interrupt is signaled.
var interruptHandlersDone = make(qu.C)
var simulateInterruptChannel = make(qu.C, 1)
// signals defines the signals that are handled to do a clean shutdown.
// Conditional compilation is used to also include SIGTERM on Unix.
var signals = []os.Signal{os.Interrupt}
// simulateInterrupt requests invoking the clean termination process by an
// internal component instead of a SIGINT.
func simulateInterrupt() {
select {
case simulateInterruptChannel <- struct{}{}:
default:
}
}
// mainInterruptHandler listens for SIGINT (Ctrl+C) signals on the
// interruptChannel and invokes the registered interruptCallbacks accordingly.
// It also listens for callback registration. It must be run as a goroutine.
func mainInterruptHandler() {
// interruptCallbacks is a list of callbacks to invoke when a
// SIGINT (Ctrl+C) is received.
var interruptCallbacks []func()
invokeCallbacks := func() {
// run handlers in LIFO order.
for i := range interruptCallbacks {
idx := len(interruptCallbacks) - 1 - i
interruptCallbacks[idx]()
}
close(interruptHandlersDone)
}
for {
select {
case sig := <-interruptChannel:
log <- cl.Infof{
"received signal (%s) - shutting down...", sig,
}
invokeCallbacks()
return
case <-simulateInterruptChannel:
log <- cl.Inf(
"received shutdown request - shutting down...",
)
invokeCallbacks()
return
case handler := <-addHandlerChannel:
interruptCallbacks = append(interruptCallbacks, handler)
}
}
}
// addInterruptHandler adds a handler to call when a SIGINT (Ctrl+C) is
// received.
func addInterruptHandler(handler func()) {
// Create the channel and start the main interrupt handler which invokes
// all other callbacks and exits if not already done.
if interruptChannel == nil {
interruptChannel = make(chan os.Signal, 1)
signal.Notify(interruptChannel, signals...)
go mainInterruptHandler()
}
addHandlerChannel <- handler
}
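mainInterruptHandler above runs the registered callbacks in LIFO order, so the most recently started component is torn down first. A self-contained sketch of just that ordering, without the signal plumbing (the names "open-db" and "start-rpc" are illustrative):

```go
package main

import "fmt"

// runLIFO invokes callbacks in last-registered-first order, the same
// order mainInterruptHandler uses when a shutdown is triggered.
func runLIFO(callbacks []func()) {
	for i := len(callbacks) - 1; i >= 0; i-- {
		callbacks[i]()
	}
}

// shutdownOrder registers one callback per name and returns the order
// in which they actually ran.
func shutdownOrder(names []string) (order []string) {
	var callbacks []func()
	for _, name := range names {
		name := name // capture the per-iteration value
		callbacks = append(callbacks, func() { order = append(order, name) })
	}
	runLIFO(callbacks)
	return order
}

func main() {
	fmt.Println(shutdownOrder([]string{"open-db", "start-rpc"})) // [start-rpc open-db]
}
```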
cmd/wallet/_signalsigterm.go Executable file
// +build darwin dragonfly freebsd linux netbsd openbsd solaris
package wallet
import (
"os"
"syscall"
)
func init() {
signals = []os.Signal{os.Interrupt, syscall.SIGTERM}
}
cmd/wallet/chainntfns.go Normal file
package wallet
import (
"bytes"
"github.com/p9c/p9/pkg/btcaddr"
"strings"
"github.com/p9c/p9/pkg/chainclient"
"github.com/p9c/p9/pkg/txscript"
wm "github.com/p9c/p9/pkg/waddrmgr"
"github.com/p9c/p9/pkg/walletdb"
tm "github.com/p9c/p9/pkg/wtxmgr"
)
func (w *Wallet) handleChainNotifications() {
if w == nil {
panic("w should not be nil")
}
defer w.wg.Done()
chainClient, e := w.requireChainClient()
if e != nil {
E.Ln("handleChainNotifications called without RPC client", e)
return
}
sync := func(w *Wallet) {
if w.db != nil {
// At the moment there is no recourse if the rescan fails for some reason, however, the wallet will not be
// marked synced and many methods will error early since the wallet is known to be out of date.
e := w.syncWithChain()
if e != nil && !w.ShuttingDown() {
W.Ln("unable to synchronize wallet to chain:", e)
}
}
}
catchUpHashes := func(
w *Wallet, client chainclient.Interface,
height int32,
) (e error) {
// TODO(aakselrod): There's a race condition here, which happens when a reorg occurs between the rescanProgress
// notification and the last GetBlockHash call. The solution when using pod is to make pod send blockconnected
// notifications with each block the way Neutrino does, and get rid of the loop. The other alternative is to
// check the final hash and, if it doesn't match the original hash returned by the notification, to roll back
// and restart the rescan.
I.F(
"handleChainNotifications: catching up block hashes to height %d, this might take a while", height,
)
e = walletdb.Update(
w.db, func(tx walletdb.ReadWriteTx) (e error) {
ns := tx.ReadWriteBucket(waddrmgrNamespaceKey)
startBlock := w.Manager.SyncedTo()
for i := startBlock.Height + 1; i <= height; i++ {
hash, e := client.GetBlockHash(int64(i))
if e != nil {
return e
}
header, e := chainClient.GetBlockHeader(hash)
if e != nil {
return e
}
bs := wm.BlockStamp{
Height: i,
Hash: *hash,
Timestamp: header.Timestamp,
}
e = w.Manager.SetSyncedTo(ns, &bs)
if e != nil {
return e
}
}
return nil
},
)
if e != nil {
E.F(
"failed to update address manager sync state for height %d: %v",
height, e,
)
}
I.Ln("done catching up block hashes")
return e
}
for {
select {
case n, ok := <-chainClient.Notifications():
if !ok {
return
}
var notificationName string
var e error
switch n := n.(type) {
case chainclient.ClientConnected:
if w != nil {
go sync(w)
}
case chainclient.BlockConnected:
e = walletdb.Update(
w.db, func(tx walletdb.ReadWriteTx) (e error) {
return w.connectBlock(tx, tm.BlockMeta(n))
},
)
notificationName = "blockconnected"
case chainclient.BlockDisconnected:
e = walletdb.Update(
w.db, func(tx walletdb.ReadWriteTx) (e error) {
return w.disconnectBlock(tx, tm.BlockMeta(n))
},
)
notificationName = "blockdisconnected"
case chainclient.RelevantTx:
e = walletdb.Update(
w.db, func(tx walletdb.ReadWriteTx) (e error) {
return w.addRelevantTx(tx, n.TxRecord, n.Block)
},
)
notificationName = "recvtx/redeemingtx"
case chainclient.FilteredBlockConnected:
// Atomically update for the whole block.
if len(n.RelevantTxs) > 0 {
e = walletdb.Update(
w.db, func(
tx walletdb.ReadWriteTx,
) (e error) {
for _, rec := range n.RelevantTxs {
e = w.addRelevantTx(
tx, rec,
n.Block,
)
if e != nil {
return e
}
}
return nil
},
)
}
notificationName = "filteredblockconnected"
// The following require some database maintenance, but also need to be reported to the wallet's rescan
// goroutine.
case *chainclient.RescanProgress:
e = catchUpHashes(w, chainClient, n.Height)
notificationName = "rescanprogress"
select {
case w.rescanNotifications <- n:
case <-w.quitChan().Wait():
return
}
case *chainclient.RescanFinished:
e = catchUpHashes(w, chainClient, n.Height)
notificationName = "rescanfinished"
w.SetChainSynced(true)
select {
case w.rescanNotifications <- n:
case <-w.quitChan().Wait():
return
}
}
if e != nil {
// On out-of-sync blockconnected notifications, only send a debug message.
errStr := "failed to process consensus server " +
"notification (name: `%s`, detail: `%v`)"
if notificationName == "blockconnected" &&
strings.Contains(
e.Error(),
"couldn't get hash from database",
) {
D.F(errStr, notificationName, e)
} else {
E.F(errStr, notificationName, e)
}
}
case <-w.quit.Wait():
return
}
}
}
// connectBlock handles a chain server notification by marking a wallet that's currently in-sync with the chain server
// as being synced up to the passed block.
func (w *Wallet) connectBlock(dbtx walletdb.ReadWriteTx, b tm.BlockMeta) (e error) {
addrmgrNs := dbtx.ReadWriteBucket(waddrmgrNamespaceKey)
bs := wm.BlockStamp{
Height: b.Height,
Hash: b.Hash,
Timestamp: b.Time,
}
e = w.Manager.SetSyncedTo(addrmgrNs, &bs)
if e != nil {
return e
}
// Notify interested clients of the connected block.
//
// TODO: move all notifications outside of the database transaction.
w.NtfnServer.notifyAttachedBlock(dbtx, &b)
return nil
}
// disconnectBlock handles a chain server reorganize by rolling back all block history from the reorged block for a
// wallet in-sync with the chain server.
func (w *Wallet) disconnectBlock(dbtx walletdb.ReadWriteTx, b tm.BlockMeta) (e error) {
addrmgrNs := dbtx.ReadWriteBucket(waddrmgrNamespaceKey)
txmgrNs := dbtx.ReadWriteBucket(wtxmgrNamespaceKey)
if !w.ChainSynced() {
return nil
}
// Disconnect the removed block and all blocks after it if we know about the disconnected block. Otherwise, the
// block is in the future.
if b.Height <= w.Manager.SyncedTo().Height {
hash, e := w.Manager.BlockHash(addrmgrNs, b.Height)
if e != nil {
return e
}
if bytes.Equal(hash[:], b.Hash[:]) {
bs := wm.BlockStamp{
Height: b.Height - 1,
}
hash, e = w.Manager.BlockHash(addrmgrNs, bs.Height)
if e != nil {
return e
}
b.Hash = *hash
client := w.ChainClient()
header, e := client.GetBlockHeader(hash)
if e != nil {
return e
}
bs.Timestamp = header.Timestamp
e = w.Manager.SetSyncedTo(addrmgrNs, &bs)
if e != nil {
return e
}
e = w.TxStore.Rollback(txmgrNs, b.Height)
if e != nil {
return e
}
}
}
// Notify interested clients of the disconnected block.
w.NtfnServer.notifyDetachedBlock(&b.Hash)
return nil
}
func (w *Wallet) addRelevantTx(dbtx walletdb.ReadWriteTx, rec *tm.TxRecord, block *tm.BlockMeta) (e error) {
addrmgrNs := dbtx.ReadWriteBucket(waddrmgrNamespaceKey)
txmgrNs := dbtx.ReadWriteBucket(wtxmgrNamespaceKey)
// At the moment all notified transactions are assumed to actually be relevant. This assumption will not hold true
// when SPV support is added, but until then, simply insert the transaction because there should either be one or
// more relevant inputs or outputs.
e = w.TxStore.InsertTx(txmgrNs, rec, block)
if e != nil {
return e
}
// Chk every output to determine whether it is controlled by a wallet key. If so, mark the output as a credit.
for i, output := range rec.MsgTx.TxOut {
var addrs []btcaddr.Address
_, addrs, _, e = txscript.ExtractPkScriptAddrs(
output.PkScript,
w.chainParams,
)
if e != nil {
// Non-standard outputs are skipped.
continue
}
for _, addr := range addrs {
ma, e := w.Manager.Address(addrmgrNs, addr)
if e == nil {
// TODO: Credits should be added with the account they belong to, so tm is able to track per-account
// balances.
e = w.TxStore.AddCredit(
txmgrNs, rec, block, uint32(i),
ma.Internal(),
)
if e != nil {
return e
}
e = w.Manager.MarkUsed(addrmgrNs, addr)
if e != nil {
return e
}
T.Ln("marked address used:", addr)
continue
}
// Missing addresses are skipped. Other errors should be propagated.
if !wm.IsError(e, wm.ErrAddressNotFound) {
return e
}
}
}
// Send notification of mined or unmined transaction to any interested clients.
//
// TODO: Avoid the extra db hits.
if block == nil {
details, e := w.TxStore.UniqueTxDetails(txmgrNs, &rec.Hash, nil)
if e != nil {
E.Ln("cannot query transaction details for notification:", e)
}
// It's possible that the transaction was not found within the wallet's set of unconfirmed transactions due to
// it already being confirmed, so we'll avoid notifying it.
//
// TODO(wilmer): ideally we should find the culprit to why we're receiving an additional unconfirmed
// chain.RelevantTx notification from the chain backend.
if details != nil {
w.NtfnServer.notifyUnminedTransaction(dbtx, details)
}
} else {
details, e := w.TxStore.UniqueTxDetails(txmgrNs, &rec.Hash, &block.Block)
if e != nil {
E.Ln("cannot query transaction details for notification:", e)
}
// We'll only notify the transaction if it was found within the wallet's set of confirmed transactions.
if details != nil {
w.NtfnServer.notifyMinedTransaction(dbtx, details, block)
}
}
return nil
}
cmd/wallet/common.go Normal file
package wallet
import (
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"time"
"github.com/p9c/p9/pkg/chainhash"
"github.com/p9c/p9/pkg/wire"
)
// Note: The following common types should never reference the Wallet type. Long term goal is to move these to their own
// package so that the database access APIs can create them directly for the wallet to return.
// BlockIdentity identifies a block, or the lack of one (used to describe an unmined transaction).
type BlockIdentity struct {
Hash chainhash.Hash
Height int32
}
// None returns whether there is no block described by the instance. When associated with a transaction, this indicates
// the transaction is unmined.
func (b *BlockIdentity) None() bool {
// BUG: Because dcrwallet uses both 0 and -1 in various places to refer to an unmined transaction this must check
// against both and may not ever be usable to represent the genesis block.
return *b == BlockIdentity{Height: -1} || *b == BlockIdentity{}
}
// OutputKind describes a kind of transaction output. This is used to differentiate between coinbase, stakebase, and
// normal outputs.
type OutputKind byte
// Defined OutputKind constants
const (
OutputKindNormal OutputKind = iota
OutputKindCoinbase
)
// TransactionOutput describes an output that was or is at least partially controlled by the wallet. Depending on
// context, this could refer to an unspent output, or a spent one.
type TransactionOutput struct {
OutPoint wire.OutPoint
Output wire.TxOut
OutputKind OutputKind
// These should be added later when the DB can return them more efficiently:
//
// TxLockTime uint32
// TxExpiry uint32
ContainingBlock BlockIdentity
ReceiveTime time.Time
}
// OutputRedeemer identifies the transaction input which redeems an output.
type OutputRedeemer struct {
TxHash chainhash.Hash
InputIndex uint32
}
// P2SHMultiSigOutput describes a transaction output with a pay-to-script-hash output script and an imported redemption
// script. Along with common details of the output, this structure also includes the P2SH address the script was created
// from and the number of signatures required to redeem it.
//
// TODO: Could be useful to return how many of the required signatures can be created by this wallet.
type P2SHMultiSigOutput struct {
// TODO: Add a TransactionOutput member to this struct and remove these fields which are duplicated by it. This
// improves consistency. Only not done now because wtxmgr APIs don't support an efficient way of fetching other
// Transactionoutput data together with the rest of the multisig info.
OutPoint wire.OutPoint
OutputAmount amt.Amount
ContainingBlock BlockIdentity
P2SHAddress *btcaddr.ScriptHash
RedeemScript []byte
M, N uint8 // M of N signatures required to redeem
Redeemer *OutputRedeemer // nil unless spent
}
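The BUG note on BlockIdentity.None above means the method must treat both the zero value and a Height of -1 as "unmined". A hedged standalone sketch of that check (blockID here only mirrors the shape of BlockIdentity; the real type holds a chainhash.Hash):

```go
package main

import "fmt"

// blockID mirrors the shape of BlockIdentity for illustration only.
type blockID struct {
	Hash   [32]byte
	Height int32
}

// none reports whether b describes no block, i.e. an unmined
// transaction. Both the zero value and Height -1 are in use to mean
// "unmined", so both must match; a side effect is that the genesis
// block can never be represented by this type.
func none(b blockID) bool {
	return b == blockID{Height: -1} || b == blockID{}
}

func main() {
	fmt.Println(none(blockID{}), none(blockID{Height: -1}), none(blockID{Height: 100})) // true true false
}
```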
cmd/wallet/config.go Normal file
package wallet
// Options contains the required options for running the legacy RPC server.
type Options struct {
Username string
Password string
MaxPOSTClients int64
MaxWebsocketClients int64
}
cmd/wallet/createtx.go Normal file
View File

@@ -0,0 +1,242 @@
// Package wallet Copyright (c) 2015-2016 The btcsuite developers
package wallet
import (
"fmt"
"github.com/p9c/p9/pkg/amt"
"github.com/p9c/p9/pkg/btcaddr"
"github.com/p9c/p9/pkg/chainclient"
"sort"
ec "github.com/p9c/p9/pkg/ecc"
"github.com/p9c/p9/pkg/txauthor"
"github.com/p9c/p9/pkg/txscript"
"github.com/p9c/p9/pkg/waddrmgr"
"github.com/p9c/p9/pkg/walletdb"
"github.com/p9c/p9/pkg/wire"
"github.com/p9c/p9/pkg/wtxmgr"
)
// byAmount defines the methods needed to satisfy sort.Interface to sort credits by their output amount.
type byAmount []wtxmgr.Credit
func (s byAmount) Len() int { return len(s) }
func (s byAmount) Less(i, j int) bool { return s[i].Amount < s[j].Amount }
func (s byAmount) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func makeInputSource(eligible []wtxmgr.Credit) txauthor.InputSource {
// Pick largest outputs first. This is only done for compatibility with previous
// tx creation code, not because it's a good idea.
sort.Sort(sort.Reverse(byAmount(eligible)))
// Current inputs and their total value. These are closed over by the returned
// input source and reused across multiple calls.
currentTotal := amt.Amount(0)
currentInputs := make([]*wire.TxIn, 0, len(eligible))
currentScripts := make([][]byte, 0, len(eligible))
currentInputValues := make([]amt.Amount, 0, len(eligible))
return func(target amt.Amount) (
amt.Amount, []*wire.TxIn,
[]amt.Amount, [][]byte, error,
) {
for currentTotal < target && len(eligible) != 0 {
nextCredit := &eligible[0]
eligible = eligible[1:]
nextInput := wire.NewTxIn(&nextCredit.OutPoint, nil, nil)
currentTotal += nextCredit.Amount
currentInputs = append(currentInputs, nextInput)
currentScripts = append(currentScripts, nextCredit.PkScript)
currentInputValues = append(currentInputValues, nextCredit.Amount)
}
return currentTotal, currentInputs, currentInputValues, currentScripts, nil
}
}
// secretSource is an implementation of txauthor.SecretSource for the wallet's address manager.
type secretSource struct {
*waddrmgr.Manager
addrmgrNs walletdb.ReadBucket
}
// GetKey gets the private key for an address if it is available
func (s secretSource) GetKey(addr btcaddr.Address) (privKey *ec.PrivateKey, cmpr bool, e error) {
var ma waddrmgr.ManagedAddress
ma, e = s.Address(s.addrmgrNs, addr)
if e != nil {
return nil, false, e
}
var ok bool
var mpka waddrmgr.ManagedPubKeyAddress
mpka, ok = ma.(waddrmgr.ManagedPubKeyAddress)
if !ok {
e = fmt.Errorf(
"managed address type for %v is `%T` but "+
"want waddrmgr.ManagedPubKeyAddress", addr, ma,
)
return nil, false, e
}
if privKey, e = mpka.PrivKey(); E.Chk(e) {
return nil, false, e
}
return privKey, ma.Compressed(), nil
}
// GetScript gets the redeem script for a pay-to-script-hash address if it is available
func (s secretSource) GetScript(addr btcaddr.Address) ([]byte, error) {
ma, e := s.Address(s.addrmgrNs, addr)
if e != nil {
return nil, e
}
msa, ok := ma.(waddrmgr.ManagedScriptAddress)
if !ok {
e := fmt.Errorf(
"managed address type for %v is `%T` but "+
"want waddrmgr.ManagedScriptAddress", addr, ma,
)
return nil, e
}
return msa.Script()
}
// txToOutputs creates a signed transaction which includes each output from
// outputs. Previous outputs to redeem are chosen from the passed account's
// UTXO set and minconf policy. An additional output may be added to return
// change to the wallet. An appropriate fee is included based on the wallet's
// current relay fee. The wallet must be unlocked to create the transaction.
func (w *Wallet) txToOutputs(
outputs []*wire.TxOut, account uint32,
minconf int32, feeSatPerKb amt.Amount,
) (tx *txauthor.AuthoredTx, e error) {
var chainClient chainclient.Interface
if chainClient, e = w.requireChainClient(); E.Chk(e) {
return nil, e
}
e = walletdb.Update(
w.db, func(dbtx walletdb.ReadWriteTx) (e error) {
addrmgrNs := dbtx.ReadWriteBucket(waddrmgrNamespaceKey)
// Get current block's height and hash.
var bs *waddrmgr.BlockStamp
if bs, e = chainClient.BlockStamp(); E.Chk(e) {
return
}
var eligible []wtxmgr.Credit
if eligible, e = w.findEligibleOutputs(dbtx, account, minconf, bs); E.Chk(e) {
return
}
inputSource := makeInputSource(eligible)
changeSource := func() (b []byte, e error) {
// Derive the change output script. As a hack to allow spending from the
// imported account, change addresses are created from account 0.
var changeAddr btcaddr.Address
if account == waddrmgr.ImportedAddrAccount {
changeAddr, e = w.newChangeAddress(addrmgrNs, 0)
} else {
changeAddr, e = w.newChangeAddress(addrmgrNs, account)
}
if E.Chk(e) {
return
}
return txscript.PayToAddrScript(changeAddr)
}
if tx, e = txauthor.NewUnsignedTransaction(outputs, feeSatPerKb, inputSource, changeSource); E.Chk(e) {
return
}
// Randomize change position, if change exists, before signing. This doesn't
// affect the serialize size, so the change amount will still be valid.
if tx.ChangeIndex >= 0 {
tx.RandomizeChangePosition()
}
return tx.AddAllInputScripts(secretSource{w.Manager, addrmgrNs})
},
)
if E.Chk(e) {
return
}
if e = validateMsgTx(tx.Tx, tx.PrevScripts, tx.PrevInputValues); E.Chk(e) {
return
}
if tx.ChangeIndex >= 0 && account == waddrmgr.ImportedAddrAccount {
changeAmount := amt.Amount(tx.Tx.TxOut[tx.ChangeIndex].Value)
W.F(
"spend from imported account produced change: moving %v from imported account into default account.",
changeAmount,
)
}
return
}
func (w *Wallet) findEligibleOutputs(
dbtx walletdb.ReadTx,
account uint32,
minconf int32,
bs *waddrmgr.BlockStamp,
) ([]wtxmgr.Credit, error) {
addrmgrNs := dbtx.ReadBucket(waddrmgrNamespaceKey)
txmgrNs := dbtx.ReadBucket(wtxmgrNamespaceKey)
unspent, e := w.TxStore.UnspentOutputs(txmgrNs)
if e != nil {
return nil, e
}
// TODO: Eventually all of these filters (except perhaps output locking) should
// be handled by the call to UnspentOutputs (or similar). Because one of these
// filters requires matching the output script to the desired
// account, this change depends on making wtxmgr a waddrmgr dependency and
// requesting unspent outputs for a single account.
eligible := make([]wtxmgr.Credit, 0, len(unspent))
for i := range unspent {
output := &unspent[i]
// Only include this output if it meets the required number of confirmations.
// Coinbase transactions must have reached maturity before their outputs
// may be spent.
if !confirmed(minconf, output.Height, bs.Height) {
continue
}
if output.FromCoinBase {
target := int32(w.chainParams.CoinbaseMaturity)
if !confirmed(target, output.Height, bs.Height) {
continue
}
}
// Locked unspent outputs are skipped.
if w.LockedOutpoint(output.OutPoint) {
continue
}
// Only include the output if it is associated with the passed account.
//
// TODO: Handle multisig outputs by determining if enough of the addresses are
// controlled.
var addrs []btcaddr.Address
if _, addrs, _, e = txscript.ExtractPkScriptAddrs(
output.PkScript, w.chainParams,
); E.Chk(e) || len(addrs) != 1 {
continue
}
var addrAcct uint32
if _, addrAcct, e = w.Manager.AddrAccount(addrmgrNs, addrs[0]); E.Chk(e) ||
addrAcct != account {
continue
}
eligible = append(eligible, *output)
}
return eligible, nil
}
// validateMsgTx verifies transaction input scripts for tx. All previous output
// scripts from outputs redeemed by the transaction, in the same order they are
// spent, must be passed in the prevScripts slice.
func validateMsgTx(tx *wire.MsgTx, prevScripts [][]byte, inputValues []amt.Amount) (e error) {
hashCache := txscript.NewTxSigHashes(tx)
for i, prevScript := range prevScripts {
var vm *txscript.Engine
vm, e = txscript.NewEngine(
prevScript, tx, i,
txscript.StandardVerifyFlags, nil, hashCache, int64(inputValues[i]),
)
if E.Chk(e) {
return fmt.Errorf("cannot create script engine: %s", e)
}
e = vm.Execute()
if E.Chk(e) {
return fmt.Errorf("cannot validate transaction: %s", e)
}
}
return nil
}

26
cmd/wallet/disksync.go Normal file
View File

@@ -0,0 +1,26 @@
package wallet
import (
"fmt"
"os"
)
// checkCreateDir checks that the path exists and is a directory. If path does not exist, it is created.
func checkCreateDir(path string) (e error) {
var fi os.FileInfo
if fi, e = os.Stat(path); E.Chk(e) {
if os.IsNotExist(e) {
// Attempt data directory creation
if e = os.MkdirAll(path, 0700); E.Chk(e) {
return fmt.Errorf("cannot create directory: %s", e)
}
} else {
return fmt.Errorf("error checking directory: %s", e)
}
} else {
if !fi.IsDir() {
return fmt.Errorf("path '%s' is not a directory", path)
}
}
return nil
}

7
cmd/wallet/doc.go Normal file
View File

@@ -0,0 +1,7 @@
/*Package wallet provides ...
TODO: Flesh out this section
Overview
*/
package wallet

16
cmd/wallet/doc/README.md Normal file
View File

@@ -0,0 +1,16 @@
# RPC Documentation
This project provides a [gRPC](http://www.grpc.io/) server for Remote Procedure
Call (RPC) access from other processes. This is intended to be the primary means
by which users, through other client programs, interact with the wallet.
These documents cover the documentation for both consumers of the server and
developers who must make changes or additions to the API and server
implementation:
- [API specification](api.md)
- [Client usage](clientusage.md)
- [Making API changes](serverchanges.md)
A legacy RPC server based on the JSON-RPC API of Bitcoin Core's wallet is also
available, but documenting its usage is out of scope for these documents.

987
cmd/wallet/doc/api.md Normal file
View File

@@ -0,0 +1,987 @@
# RPC API Specification
Version: 2.0.1
**Note:** This document assumes the reader is familiar with gRPC concepts. Refer
to
the [gRPC Concepts documentation](http://www.grpc.io/docs/guides/concepts.html)
for any unfamiliar terms.
**Note:** The naming style used for autogenerated identifiers may differ
depending on the language being used. This document follows the naming style
used by Google in their Protocol Buffers and gRPC documentation as well as this
project's `.proto` files. That is, CamelCase is used for services, methods, and
messages, lower_snake_case for message fields, and SCREAMING_SNAKE_CASE for
enums.
**Note:** The entirety of the RPC API is currently considered unstable and may
change anytime. Stability will be gradually added based on correctness,
perceived usefulness and ease-of-use over alternatives, and user feedback.
This document is the authoritative source on the RPC API's definitions and
semantics. Any divergence from this document is an implementation error. API
fixes and additions require a version increase according to the rules of
[Semantic Versioning 2.0.0](http://semver.org/).
Only optional proto3 message fields are used (the `required` keyword is never
used in the `.proto` file). If a message field must be set to something other
than the default value, or any other values are invalid, the error must occur in
the application's message handling. This prevents accidentally introducing
parsing errors if a previously optional field is missing or a new required field
is added.
Functionality is grouped into gRPC services. Depending on what functions are
currently callable, different services will be running. As an example, the
server may be running without a loaded wallet, in which case the Wallet service
is not running and the Loader service must be used to create a new or load an
existing wallet.
- [`VersionService`](#versionservice)
- [`LoaderService`](#loaderservice)
- [`WalletService`](#walletservice)
## `VersionService`
The `VersionService` service provides the caller with versioning information
regarding the RPC server. It has no dependencies and is always running.
**Methods:**
- [`Version`](#version)
### Methods
#### `Version`
The `Version` method returns the RPC server version. Versioning follows the
rules of Semantic Versioning (SemVer) 2.0.0.
**Request:** `VersionRequest`
**Response:** `VersionResponse`
- `string version_string`: The version encoded as a string.
- `uint32 major`: The SemVer major version number.
- `uint32 minor`: The SemVer minor version number.
- `uint32 patch`: The SemVer patch version number.
- `string prerelease`: The SemVer pre-release version identifier, if any.
- `string build_metadata`: Extra SemVer build metadata, if any.
**Expected errors:** None
**Stability:** Stable
## `LoaderService`
The `LoaderService` service provides the caller with functions related to the
management of the wallet and its connection to the Bitcoin network. It has no
dependencies and is always running.
**Methods:**
- [`WalletExists`](#walletexists)
- [`CreateWallet`](#createwallet)
- [`OpenWallet`](#openwallet)
- [`CloseWallet`](#closewallet)
- [`StartConsensusRPC`](#startconsensusrpc)
**Shared messages:**
- [`BlockDetails`](#blockdetails)
- [`TransactionDetails`](#transactiondetails)
### Methods
#### `WalletExists`
The `WalletExists` method returns whether a file at the wallet database's file
path exists. Clients that must load wallets with this service are expected to
call this RPC to query whether `OpenWallet` should be used to open an existing
wallet, or `CreateWallet` to create a new wallet.
**Request:** `WalletExistsRequest`
**Response:** `WalletExistsResponse`
- `bool exists`: Whether the wallet file exists.
**Expected errors:** None
**Stability:** Unstable
___
#### `CreateWallet`
The `CreateWallet` method is used to create a wallet that is protected by two
levels of encryption: the public passphrase (for data that is made public on the
blockchain) and the private passphrase (for private keys). Since the seed is not
saved in the wallet database and clients should make their users backup the
seed, it needs to be passed as part of the request.
After creating a wallet, the `WalletService` service begins running.
**Request:** `CreateWalletRequest`
- `bytes public_passphrase`: The passphrase used for the outer wallet
encryption. This passphrase protects data that is made public on the
blockchain. If this passphrase has zero length, an insecure default is used
instead.
- `bytes private_passphrase`: The passphrase used for the inner wallet
encryption. This is the passphrase used for data that must always remain
private, such as private keys. The length of this field must not be zero.
- `bytes seed`: The BIP0032 seed used to derive all wallet keys. The length of
this field must be between 16 and 64 bytes, inclusive.
**Response:** `CreateWalletReponse`
**Expected errors:**
- `FailedPrecondition`: The wallet is currently open.
- `AlreadyExists`: A file already exists at the wallet database file path.
- `InvalidArgument`: A private passphrase was not included in the request, or
the seed is of incorrect length.
**Stability:** Unstable: There needs to be a way to recover all keys and
transactions of a wallet being recovered by its seed. It is unclear whether it
should be part of this method or a `WalletService` method.
___
#### `OpenWallet`
The `OpenWallet` method is used to open an existing wallet database. If the
wallet is protected by a public passphrase, it can not be successfully opened if
the public passphrase parameter is missing or incorrect.
After opening a wallet, the `WalletService` service begins running.
**Request:** `OpenWalletRequest`
- `bytes public_passphrase`: The passphrase used for the outer wallet
encryption. This passphrase protects data that is made public on the
blockchain. If this passphrase has zero length, an insecure default is used
instead.
**Response:** `OpenWalletResponse`
**Expected errors:**
- `FailedPrecondition`: The wallet is currently open.
- `NotFound`: The wallet database file does not exist.
- `InvalidArgument`: The public encryption passphrase was missing or incorrect.
**Stability:** Unstable
___
#### `CloseWallet`
The `CloseWallet` method is used to cleanly stop all wallet operations on a
loaded wallet and close the database. After closing, the `WalletService`
service will remain running but any operations that require the database will be
unusable.
**Request:** `CloseWalletRequest`
**Response:** `CloseWalletResponse`
**Expected errors:**
- `FailedPrecondition`: The wallet is not currently open.
**Stability:** Unstable: It would be preferable to stop the `WalletService`
after closing, but there does not appear to be any way to do so currently. It
may also be a good idea to limit under what conditions a wallet can be closed,
such as only closing wallets loaded by `LoaderService` and/or using a secret to
authenticate the operation.
___
#### `StartConsensusRPC`
The `StartConsensusRPC` method is used to provide clients the ability to
dynamically start the pod RPC client. This RPC client is used for wallet syncing
and publishing transactions to the Bitcoin network.
**Request:** `StartConsensusRPCRequest`
- `string network_address`: The host/IP and optional port of the RPC server to
connect to. IP addresses may be IPv4 or IPv6. If the port is missing, a
default port is chosen corresponding to the default pod RPC port of the active
Bitcoin network.
- `string username`: The RPC username required to authenticate to the RPC
server.
- `bytes password`: The RPC password required to authenticate to the RPC server.
- `bytes certificate`: The consensus RPC server's TLS certificate. If this field
has zero length and the network address describes a loopback connection
(`localhost`, `127.0.0.1`, or `::1`) TLS will be disabled.
**Response:** `StartConsensusRPCResponse`
**Expected errors:**
- `FailedPrecondition`: A consensus RPC client is already active.
- `InvalidArgument`: The network address is ill-formatted or does not contain a
valid IP address.
- `NotFound`: The consensus RPC server is unreachable. This condition may not
return `Unavailable` as that refers to `LoaderService` itself being
unavailable.
- `InvalidArgument`: The username, password, or certificate are invalid. This
condition may not return `Unauthenticated` as that refers to the client not
having the credentials to call this method.
**Stability:** Unstable: It is unknown if the consensus RPC client will remain
used after the project gains SPV support.
## `WalletService`
The WalletService service provides RPCs for the wallet itself. The service
depends on a loaded wallet and does not run when the wallet has not been created
or opened yet.
The service provides the following methods:
- [`Ping`](#ping)
- [`Network`](#network)
- [`AccountNumber`](#accountnumber)
- [`Accounts`](#accounts)
- [`Balance`](#balance)
- [`GetTransactions`](#gettransactions)
- [`ChangePassphrase`](#changepassphrase)
- [`RenameAccount`](#renameaccount)
- [`NextAccount`](#nextaccount)
- [`NextAddress`](#nextaddress)
- [`ImportPrivateKey`](#importprivatekey)
- [`FundTransaction`](#fundtransaction)
- [`SignTransaction`](#signtransaction)
- [`PublishTransaction`](#publishtransaction)
- [`TransactionNotifications`](#transactionnotifications)
- [`SpentnessNotifications`](#spentnessnotifications)
- [`AccountNotifications`](#accountnotifications)
#### `Ping`
The `Ping` method checks whether the service is active.
**Request:** `PingRequest`
**Response:** `PingResponse`
**Expected errors:** None
**Stability:** Unstable: This may be moved to another service as it does not
depend on the wallet.
___
#### `Network`
The `Network` method returns the network identifier constant describing the
server's active network.
**Request:** `NetworkRequest`
**Response:** `NetworkResponse`
- `uint32 active_network`: The network identifier.
**Expected errors:** None
**Stability:** Unstable: This may be moved to another service as it does not
depend on the wallet.
___
#### `AccountNumber`
The `AccountNumber` method looks up a BIP0044 account number by an account's
unique name.
**Request:** `AccountNumberRequest`
- `string account_name`: The name of the account being queried.
**Response:** `AccountNumberResponse`
- `uint32 account_number`: The BIP0044 account number.
**Expected errors:**
- `Aborted`: The wallet database is closed.
- `NotFound`: No accounts exist by the name in the request.
**Stability:** Unstable
___
#### `Accounts`
The `Accounts` method returns the current properties of all accounts managed in
the wallet.
**Request:** `AccountsRequest`
**Response:** `AccountsResponse`
- `repeated Account accounts`: Account properties grouped into `Account` nested
message types, one per account, ordered by increasing account numbers.
**Nested message:** `Account`
- `uint32 account_number`: The BIP0044 account number.
- `string account_name`: The name of the account.
- `int64 total_balance`: The total (zero-conf and immature) balance, counted
in Satoshis.
- `uint32 external_key_count`: The number of derived keys in the external
key chain.
- `uint32 internal_key_count`: The number of derived keys in the internal
key chain.
- `uint32 imported_key_count`: The number of imported keys.
- `bytes current_block_hash`: The hash of the block the wallet is considered to
be synced with.
- `int32 current_block_height`: The height of the block the wallet is considered
to be synced with.
**Expected errors:**
- `Aborted`: The wallet database is closed.
**Stability:** Unstable
___
#### `Balance`
The `Balance` method queries the wallet for an account's balance. Balances are
returned as combination of total, spendable (by consensus and request policy),
and unspendable immature coinbase balances.
**Request:** `BalanceRequest`
- `uint32 account_number`: The account number to query.
- `int32 required_confirmations`: The number of confirmations required before an
unspent transaction output's value is included in the spendable balance. This
may not be negative.
**Response:** `BalanceResponse`
- `int64 total`: The total (zero-conf and immature) balance, counted in
Satoshis.
- `int64 spendable`: The spendable balance, given some number of required
confirmations, counted in Satoshis. This equals the total balance when the
required number of confirmations is zero and there are no immature coinbase
outputs.
- `int64 immature_reward`: The total value of all immature coinbase outputs,
counted in Satoshis.
**Expected errors:**
- `InvalidArgument`: The required number of confirmations is negative.
- `Aborted`: The wallet database is closed.
- `NotFound`: The account does not exist.
**Stability:** Unstable: It may prove useful to modify this RPC to query
multiple accounts together.
___
#### `GetTransactions`
The `GetTransactions` method queries the wallet for relevant transactions. The
query set may be specified using a block range, inclusive, with the heights or
hashes of the minimum and maximum block. Transaction results are grouped
by the block they are mined in, or grouped together with other unmined
transactions.
**Request:** `GetTransactionsRequest`
- `bytes starting_block_hash`: The block hash of the block to begin including
transactions from. If this field is set to the default, the
`starting_block_height` field is used instead. If changed, the byte array must
have length 32 and `starting_block_height` must be zero.
- `sint32 starting_block_height`: The block height to begin including
transactions from. If this field is non-zero, `starting_block_hash` must be
set to its default value to avoid ambiguity. If positive, the field is
interpreted as a block height. If negative, the height is subtracted from the
height of the block the wallet considers itself in sync with.
- `bytes ending_block_hash`: The block hash of the last block to include
transactions from. If this field is set to the default, the
`ending_block_height` field is used instead. If changed, the byte array must
have length 32 and `ending_block_height` must be zero.
- `int32 ending_block_height`: The block height of the last block to include
transactions from. If non-zero, the `ending_block_hash` field must be set to
its default value to avoid ambiguity. If both this field and
`ending_block_hash` are set to their default values, no upper block limit is
used and transactions through the best block and all unmined transactions are
included.
**Response:** `GetTransactionsResponse`
- `repeated BlockDetails mined_transactions`: All mined transactions, organized
by blocks in the order they appear in the blockchain.
The `BlockDetails` message is used by other methods and is documented
[here](#blockdetails).
- `repeated TransactionDetails unmined_transactions`: All unmined transactions.
The ordering is unspecified.
The `TransactionDetails` message is used by other methods and is documented
[here](#transactiondetails).
**Expected errors:**
- `InvalidArgument`: A non-default block hash field did not have the correct
length.
- `Aborted`: The wallet database is closed.
- `NotFound`: A block, specified by its height or hash, is unknown to the
wallet.
**Stability:** Unstable
- There is currently no way to get only unmined transactions due to the way the
block range is specified.
- It would be useful to ignore the block range and return some minimum number of
the most recent transaction, but it is unclear if that should be added to this
method's request object, or to make a new method.
- A specified ordering (such as dependency order) for all returned unmined
transactions would be useful.
___
#### `ChangePassphrase`
The `ChangePassphrase` method requests a change to either the public (outer) or
private (inner) encryption passphrases.
**Request:** `ChangePassphraseRequest`
- `Key key`: The key being changed.
**Nested enum:** `Key`
- `PRIVATE`: The request specifies to change the private (inner) encryption
passphrase.
- `PUBLIC`: The request specifies to change the public (outer) encryption
passphrase.
- `bytes old_passphrase`: The current passphrase for the encryption key. This is
the value being modified. If the public passphrase is being modified and this
value is the default value, an insecure default is used instead.
- `bytes new_passphrase`: The replacement passphrase. This field may only have
zero length if the public passphrase is being changed, in which case an
insecure default will be used instead.
**Response:** `ChangePassphraseResponse`
**Expected errors:**
- `InvalidArgument`: A zero length passphrase was specified when changing the
private passphrase, or the old passphrase was incorrect.
- `Aborted`: The wallet database is closed.
**Stability:** Unstable
___
#### `RenameAccount`
The `RenameAccount` method requests a change to an account's name property.
**Request:** `RenameAccountRequest`
- `uint32 account_number`: The number of the account being modified.
- `string new_name`: The new name for the account.
**Response:** `RenameAccountResponse`
**Expected errors:**
- `Aborted`: The wallet database is closed.
- `InvalidArgument`: The new account name is a reserved name.
- `NotFound`: The account does not exist.
- `AlreadyExists`: An account by the same name already exists.
**Stability:** Unstable
___
#### `NextAccount`
The `NextAccount` method generates the next BIP0044 account for the wallet.
**Request:** `NextAccountRequest`
- `bytes passphrase`: The private passphrase required to derive the next
account's key.
- `string account_name`: The name to give the new account.
**Response:** `NextAccountResponse`
- `uint32 account_number`: The number of the newly-created account.
**Expected errors:**
- `Aborted`: The wallet database is closed.
- `InvalidArgument`: The private passphrase is incorrect.
- `InvalidArgument`: The new account name is a reserved name.
- `AlreadyExists`: An account by the same name already exists.
**Stability:** Unstable
___
#### `NextAddress`
The `NextAddress` method generates the next deterministic address for the
wallet.
**Request:** `NextAddressRequest`
- `uint32 account`: The number of the account to derive the next address for.
- `Kind kind`: The type of address to generate.
**Nested enum:** `Kind`
- `BIP0044_EXTERNAL`: The request specifies to generate the next address for
the account's BIP0044 external key chain.
- `BIP0044_INTERNAL`: The request specifies to generate the next address for
the account's BIP0044 internal key chain.
**Response:** `NextAddressResponse`
- `string address`: The payment address string.
**Expected errors:**
- `Aborted`: The wallet database is closed.
- `NotFound`: The account does not exist.
**Stability:** Unstable
___
#### `ImportPrivateKey`
The `ImportPrivateKey` method imports a private key in Wallet Import Format
(WIF) encoding to a wallet account. A rescan may optionally be started to search
for transactions involving the private key's associated payment address.
**Request:** `ImportPrivateKeyRequest`
- `bytes passphrase`: The wallet's private passphrase.
- `uint32 account`: The account number to associate the imported key with.
- `string private_key_wif`: The private key, encoded using WIF.
- `bool rescan`: Whether or not to perform a blockchain rescan for the imported
key.
**Response:** `ImportPrivateKeyResponse`
**Expected errors:**
- `InvalidArgument`: The private key WIF string is not a valid WIF encoding.
- `Aborted`: The wallet database is closed.
- `InvalidArgument`: The private passphrase is incorrect.
- `NotFound`: The account does not exist.
**Stability:** Unstable: There should be a way to specify a starting block or
time to begin the rescan at. Additionally, since the client is expected to be
able to do asynchronous RPC, it may be useful for the response to block on the
rescan finishing before returning.
___
#### `FundTransaction`
The `FundTransaction` method queries the wallet for unspent transaction outputs
controlled by some account. Results may be refined by setting a target output
amount and limiting the required confirmations. The selection algorithm is
unspecified.
Output results are always created even if a minimum target output amount could
not be reached. This allows this method to behave similarly to the `Balance`
method while also including the outputs that make up that balance.
Change outputs can optionally be returned by this method as well. This can
provide the caller with everything necessary to construct an unsigned
transaction paying to already known addresses or scripts.
**Request:** `FundTransactionRequest`
- `uint32 account`: Account number containing the keys controlling the output
set to query.
- `int64 target_amount`: If positive, the service may limit output results to
those that sum to at least this amount (counted in Satoshis). If zero, all
outputs not excluded by other arguments are returned. This may not be
negative.
- `int32 required_confirmations`: The minimum number of block confirmations
needed to consider including an output in the return set. This may not be
negative.
- `bool include_immature_coinbases`: If true, immature coinbase outputs will
also be included.
- `bool include_change_script`: If true, a change script is included in the
response object.
**Response:** `FundTransactionResponse`
- `repeated PreviousOutput selected_outputs`: The output set returned as a list
of `PreviousOutput` nested message objects.
**Nested message:** `PreviousOutput`
- `bytes transaction_hash`: The hash of the transaction this output
originates from.
- `uint32 output_index`: The output index of the transaction this output
originates from.
- `int64 amount`: The output value (counted in Satoshis) of the unspent
transaction output.
- `bytes pk_script`: The output script of the unspent transaction output.
- `int64 receive_time`: The earliest Unix time the wallet became aware of
the transaction containing this output.
- `bool from_coinbase`: Whether the output is a coinbase output.
- `int64 total_amount`: The sum of all returned output amounts. This may be less
than a positive target amount if there were not enough eligible outputs
available.
- `bytes change_pk_script`: A transaction output script used to pay the
remaining amount to a newly-generated change address for the account. This is
null if `include_change_script` was false or the target amount was not
exceeded.
**Expected errors:**
- `InvalidArgument`: The target amount is negative.
- `InvalidArgument`: The required confirmations is negative.
- `Aborted`: The wallet database is closed.
- `NotFound`: The account does not exist.
**Stability:** Unstable
___
#### `SignTransaction`
The `SignTransaction` method adds transaction input signatures to a serialized
transaction using the wallet's private keys.
**Request:** `SignTransactionRequest`
- `bytes passphrase`: The wallet's private passphrase.
- `bytes serialized_transaction`: The transaction to add input signatures to.
- `repeated uint32 input_indexes`: The input indexes that signature scripts must
be created for. If there are no indexes, input scripts are created for every
input that is missing an input script.
**Response:** `SignTransactionResponse`
- `bytes transaction`: The serialized transaction with added input scripts.
- `repeated uint32 unsigned_input_indexes`: The indexes of every input that an
input script could not be created for.
**Expected errors:**
- `InvalidArgument`: The serialized transaction can not be decoded.
- `Aborted`: The wallet database is closed.
- `InvalidArgument`: The private passphrase is incorrect.
**Stability:** Unstable: It is unclear whether the request should include an
account so that only that account's secrets are used when creating input
scripts. It's also
missing options similar to Core's signrawtransaction, such as the sighash flags
and additional keys.
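The rule for `input_indexes` above (an empty list means "sign every input that is missing a script") can be sketched with a hypothetical helper that is not part of the API:

```go
package main

import "fmt"

// indexesToSign applies the documented SignTransactionRequest rule: when no
// indexes are requested, every input lacking a signature script is selected;
// otherwise only the requested indexes are. hasScript[i] reports whether
// input i already carries a signature script.
func indexesToSign(hasScript []bool, requested []uint32) []uint32 {
	if len(requested) > 0 {
		return requested
	}
	var idxs []uint32
	for i, ok := range hasScript {
		if !ok {
			idxs = append(idxs, uint32(i))
		}
	}
	return idxs
}

func main() {
	// inputs 0 and 2 are missing scripts; no explicit indexes requested
	fmt.Println(indexesToSign([]bool{false, true, false}, nil)) // [0 2]
}
```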
___
#### `PublishTransaction`
The `PublishTransaction` method publishes a signed, serialized transaction to
the Bitcoin network. If the transaction spends any of the wallet's unspent
outputs or creates a new output controlled by the wallet, it is saved by the
wallet and republished later if neither it nor a double spend of it is mined.
**Request:** `PublishTransactionRequest`
- `bytes signed_transaction`: The signed transaction to publish.
**Response:** `PublishTransactionResponse`
**Expected errors:**
- `InvalidArgument`: The serialized transaction can not be decoded or is missing
input scripts.
- `Aborted`: The wallet database is closed.
**Stability:** Unstable
___
#### `TransactionNotifications`
The `TransactionNotifications` method returns a stream of notifications
regarding changes to the blockchain and transactions relevant to the wallet.
**Request:** `TransactionNotificationsRequest`
**Response:** `stream TransactionNotificationsResponse`
- `repeated BlockDetails attached_blocks`: A list of blocks attached to the main
chain, sorted by increasing height. All newly mined transactions are included
in these messages, in the message corresponding to the block that contains
them. If this field has zero length, the notification is due to an unmined
transaction being added to the wallet.
The `BlockDetails` message is used by other methods and is documented
[here](#blockdetails).
- `repeated bytes detached_blocks`: The hashes of every block that was
reorganized out of the main chain. These are sorted by heights in decreasing
order (newest blocks first).
- `repeated TransactionDetails unmined_transactions`: All newly added unmined
transactions. When relevant transactions are reorganized out and not included
in (or double-spent by) the new chain, they are included here.
The `TransactionDetails` message is used by other methods and is documented
[here](#transactiondetails).
- `repeated bytes unmined_transaction_hashes`: The hashes of every
currently-unmined transaction. This differs from the `unmined_transactions`
field by including every unmined transaction, rather than those newly added to
the unmined set.
**Expected errors:**
- `Aborted`: The wallet database is closed.
**Stability:** Unstable: This method could use a better name.
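The zero-length `attached_blocks` rule above can be condensed into a tiny dispatch sketch (hypothetical helper, not part of the API):

```go
package main

import "fmt"

// notificationKind applies the documented rule: a response with zero attached
// blocks reports newly added unmined transactions; otherwise it reports
// blocks attached to the main chain (which carry newly mined transactions).
func notificationKind(attachedBlocks int) string {
	if attachedBlocks == 0 {
		return "unmined transaction added"
	}
	return "blocks attached"
}

func main() {
	fmt.Println(notificationKind(0)) // unmined transaction added
	fmt.Println(notificationKind(2)) // blocks attached
}
```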
___
#### `SpentnessNotifications`
The `SpentnessNotifications` method returns a stream of notifications regarding
the spending of unspent outputs and/or the discovery of new unspent outputs for
an account.
**Request:** `SpentnessNotificationsRequest`
- `uint32 account`: The account to create notifications for.
- `bool no_notify_unspent`: If true, do not send any notifications for
newly-discovered unspent outputs controlled by the account.
- `bool no_notify_spent`: If true, do not send any notifications for newly-spent
outputs controlled by the account.
**Response:** `stream SpentnessNotificationsResponse`
- `bytes transaction_hash`: The hash of the serialized transaction containing
the output being reported.
- `uint32 output_index`: The output index of the output being reported.
- `Spender spender`: If null, the output is a newly-discovered unspent output.
If not null, the message records the transaction input that spends the
previously-unspent output.
**Nested message:** `Spender`
- `bytes transaction_hash`: The hash of the serialized transaction that
spends the reported output.
- `uint32 input_index`: The index of the input that spends the reported
output.
**Expected errors:**
- `InvalidArgument`: The `no_notify_unspent` and `no_notify_spent` request
fields are both true.
- `Aborted`: The wallet database is closed.
**Stability:** Unstable
___
#### `AccountNotifications`
The `AccountNotifications` method returns a stream of notifications for account
property changes, such as name and key counts.
**Request:** `AccountNotificationsRequest`
**Response:** `stream AccountNotificationsResponse`
- `uint32 account_number`: The BIP0044 account being reported.
- `string account_name`: The current account name.
- `uint32 external_key_count`: The current number of BIP0032 external keys
derived for the account.
- `uint32 internal_key_count`: The current number of BIP0032 internal keys
derived for the account.
- `uint32 imported_key_count`: The current number of private keys imported into
the account.
**Expected errors:**
- `Aborted`: The wallet database is closed.
**Stability:** Unstable: This should probably share a message with the
`Accounts` method.
___
### Shared messages
The following messages are used by multiple methods. To avoid unnecessary
duplication, they are documented once here.
#### `BlockDetails`
The `BlockDetails` message is included in responses to report a block and the
wallet's relevant transactions contained therein.
- `bytes hash`: The hash of the block being reported.
- `int32 height`: The height of the block being reported.
- `int64 timestamp`: The Unix time included in the block header.
- `repeated TransactionDetails transactions`: All transactions relevant to the
wallet that are mined in this block. Transactions are sorted by their block
index in increasing order.
The `TransactionDetails` message is used by other methods and is documented
[here](#transactiondetails).
**Stability**: Unstable: This should probably include the block version.
___
#### `TransactionDetails`
The `TransactionDetails` message is included in responses to report transactions
relevant to the wallet. The message includes details such as which previous
wallet inputs are spent by this transaction, whether each output is controlled
by the wallet or not, the total fee (if calculable), and the earliest time the
transaction was seen.
- `bytes hash`: The hash of the serialized transaction.
- `bytes transaction`: The serialized transaction.
- `repeated Input debits`: Properties for every previously-unspent wallet output
spent by this transaction.
**Nested message:** `Input`
- `uint32 index`: The transaction input index of the input being reported.
- `uint32 previous_account`: The account that controlled the now-spent
output.
- `int64 previous_amount`: The previous output value.
- `repeated Output credits`: Properties for every output controlled by the
wallet.
**Nested message:** `Output`
- `uint32 index`: The transaction output index of the output being reported.
- `uint32 account`: The account number of the controlled output.
- `bool internal`: Whether the output pays to an address derived from the
account's internal key series. This often means the output is a change
output.
- `int64 fee`: The transaction fee, if calculable. The fee is only calculable
when every previous output spent by this transaction is also recorded by the
wallet. Otherwise, this field is zero.
- `int64 timestamp`: The Unix time of the earliest time this transaction was
seen.
**Stability**: Unstable: Since the caller is expected to decode the serialized
transaction, and would have access to every output script, the output properties
could be changed to only include outputs controlled by the wallet.
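The fee rule described above can be illustrated with a sketch (hypothetical helper, not part of the message definitions): the fee equals the sum of the spent previous output amounts minus the sum of all output amounts, and is only calculable when every input is a recorded wallet debit.

```go
package main

import "fmt"

// calculableFee mirrors the documented TransactionDetails fee rule:
// fee = sum(previous input amounts) - sum(output amounts), but only when
// every transaction input is a recorded wallet debit; otherwise zero.
func calculableFee(inputCount int, debitAmounts, outputAmounts []int64) int64 {
	if len(debitAmounts) != inputCount {
		return 0 // some previous outputs are unknown to the wallet
	}
	var fee int64
	for _, amt := range debitAmounts {
		fee += amt
	}
	for _, amt := range outputAmounts {
		fee -= amt
	}
	return fee
}

func main() {
	// one input of 10000 Satoshis, outputs of 6000 and 3000: fee is 1000
	fmt.Println(calculableFee(1, []int64{10000}, []int64{6000, 3000}))
	// two inputs but only one known debit: fee is not calculable
	fmt.Println(calculableFee(2, []int64{10000}, []int64{6000}))
}
```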

___
**File:** `cmd/wallet/doc/clientusage.md`
# Client usage
Clients use RPC to interact with the wallet. A client may be implemented in any
language directly supported by [gRPC](http://www.grpc.io/), languages capable of
performing [FFI](https://en.wikipedia.org/wiki/Foreign_function_interface) with
these, and languages that share a common runtime (e.g. Scala, Kotlin, and Ceylon
for the JVM, F# for the CLR, etc.). Exact instructions differ slightly depending
on the language being used, but the general process is the same for each. In
short, to call RPC server methods, a client must:
1. Generate client bindings specific for the [wallet RPC server API](api.md)
2. Import or include the gRPC dependency
3. (Optional) Wrap the client bindings with application-specific types
4. Open a gRPC channel using the wallet server's self-signed TLS certificate
The only exception to these steps is if the client is being written in Go. In
that case, the first step may be omitted by importing the bindings from
btcwallet itself.
The rest of this document provides short examples of how to quickly get started
by implementing a basic client that fetches the balance of the default account
(account 0) from a testnet3 wallet listening on `localhost:18332` in several
different languages:
- [Go](#go)
- [C++](#cpp)
- [C#](#csharp)
- [Node.js](#nodejs)
- [Python](#python)
Unless otherwise stated under the language example, it is assumed that gRPC is
already installed. The gRPC installation procedure can vary greatly
depending on the operating system being used and whether a gRPC source install
is required. Follow
the [gRPC install instructions](https://github.com/grpc/grpc/blob/master/INSTALL)
if gRPC is not already installed. A full gRPC install also includes
[Protocol Buffers](https://github.com/google/protobuf) (compiled with support
for the proto3 language version), which contains the protoc tool and language
plugins used to compile this project's `.proto`
files to language-specific bindings.
## Go
The native gRPC library (gRPC Core) is not required for Go clients (a pure Go
implementation is used instead) and no additional setup is required to generate
Go bindings.
```Go
package main
import (
"fmt"
"path/filepath"
log "github.com/p9c/p9/pkg/logi"
pb "git.parallelcoin.io/mod/rpc/walletrpc"
"golang.org/x/net/context"
"google.golang.org/grpc"
"google.golang.org/grpc/credentials"
"github.com/btcsuite/btcutil"
)
var certificateFile = filepath.Join(btcutil.AppDataDir("mod", false),
"rpc.cert"
)
func main() {
creds, e := credentials.NewClientTLSFromFile(certificateFile, "localhost")
if e != nil {
log.Println(e)
return
}
conn, e := grpc.Dial("localhost:18332",
grpc.WithTransportCredentials(creds)
)
if e != nil {
log.Println(e)
return
}
defer conn.Close()
c := pb.NewWalletServiceClient(conn)
balanceRequest := &pb.BalanceRequest{
AccountNumber: 0,
RequiredConfirmations: 1,
}
balanceResponse, e := c.Balance(context.Background(), balanceRequest)
if e != nil {
log.Println(e)
return
}
log.Println("Spendable balance: ", btcutil.Amount(balanceResponse
.Spendable))
}
```
<a name="cpp"/>
## C++
**Note:** Protocol Buffers and gRPC require at least C++11. The example client
is written using C++14.
**Note:** The following instructions assume the client is being written on a
Unix-like platform (with instructions using the `sh` shell and Unix-isms in the
example source code) with a source gRPC install in `/usr/local`.
First, generate the C++ language bindings by compiling the `.proto`:
```bash
$ protoc -I/path/to/btcwallet/rpc --cpp_out=. --grpc_out=. \
--plugin=protoc-gen-grpc=$(which grpc_cpp_plugin) \
/path/to/btcwallet/rpc/api.proto
```
Once the `.proto` file has been compiled, the example client can be completed.
Note that the following code uses synchronous calls which will block the main
thread on all gRPC IO.
```C++
// example.cc
#include <unistd.h>
#include <sys/types.h>
#include <pwd.h>
#include <fstream>
#include <iostream>
#include <sstream>
#include <string>
#include <grpc++/grpc++.h>
#include "api.grpc.pb.h"
using namespace std::string_literals;
struct NoHomeDirectoryException : std::exception {
char const* what() const noexcept override {
return "Failed to lookup home directory";
}
};
auto read_file(std::string const& file_path) -> std::string {
std::ifstream in{file_path};
std::stringstream ss{};
ss << in.rdbuf();
return ss.str();
}
auto main() -> int {
// Before the gRPC native library (gRPC Core) is lazily loaded and
// initialized, an environment variable must be set so BoringSSL is
// configured to use ECDSA TLS certificates (required by btcwallet).
setenv("GRPC_SSL_CIPHER_SUITES", "HIGH+ECDSA", 1);
// Note: This path is operating system-dependent. This can be created
// portably using boost::filesystem or the experimental filesystem class
// expected to ship in C++17.
auto wallet_tls_cert_file = []{
auto pw = getpwuid(getuid());
if (pw == nullptr || pw->pw_dir == nullptr) {
throw NoHomeDirectoryException{};
}
return pw->pw_dir + "/.btcwallet/rpc.cert"s;
}();
grpc::SslCredentialsOptions cred_options{
.pem_root_certs = read_file(wallet_tls_cert_file),
};
auto creds = grpc::SslCredentials(cred_options);
auto channel = grpc::CreateChannel("localhost:18332", creds);
auto stub = walletrpc::WalletService::NewStub(channel);
grpc::ClientContext context{};
walletrpc::BalanceRequest request{};
request.set_account_number(0);
request.set_required_confirmations(1);
walletrpc::BalanceResponse response{};
auto status = stub->Balance(&context, request, &response);
if (!status.ok()) {
std::cout << status.error_message() << std::endl;
} else {
std::cout << "Spendable balance: " << response.spendable() << " Satoshis" << std::endl;
}
}
```
The example can then be built with the following commands:
```bash
$ c++ -std=c++14 -I/usr/local/include -pthread -c -o api.pb.o api.pb.cc
$ c++ -std=c++14 -I/usr/local/include -pthread -c -o api.grpc.pb.o api.grpc.pb.cc
$ c++ -std=c++14 -I/usr/local/include -pthread -c -o example.o example.cc
$ c++ *.o -L/usr/local/lib -lgrpc++ -lgrpc -lgpr -lprotobuf -lpthread -ldl -o example
```
<a name="csharp"/>
## C&#35;
The quickest way of generating client bindings in a Windows .NET environment is
by using the protoc binary included in the gRPC NuGet package. From the NuGet
package manager PowerShell console, this can be performed with:
```
PM> Install-Package Grpc
```
The protoc and C# plugin binaries can then be found in the packages directory.
For example, `.\packages\Google.Protobuf.x.x.x\tools\protoc.exe` and
`.\packages\Grpc.Tools.x.x.x\tools\grpc_csharp_plugin.exe`.
When writing a client on other platforms (e.g. Mono on OS X), or when doing a
full gRPC source install on Windows, protoc and the C# plugin must be installed
by other means. Consult
the [official documentation](https://github.com/grpc/grpc/blob/master/src/csharp/README.md)
for these steps.
Once protoc and the C# plugin have been obtained, client bindings can be
generated. The following command generates the files `Api.cs` and `ApiGrpc.cs`
in the `Example` project directory using the `Walletrpc` namespace:
```PowerShell
PS> & protoc.exe -I \Path\To\btcwallet\rpc --csharp_out=Example --grpc_out=Example `
--plugin=protoc-gen-grpc=\Path\To\grpc_csharp_plugin.exe `
\Path\To\btcwallet\rpc\api.proto
```
Once references have been added to the project for the `Google.Protobuf` and
`Grpc.Core` assemblies, the example client can be implemented.
```C#
using Grpc.Core;
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Walletrpc;
namespace Example
{
static class Program
{
static void Main(string[] args)
{
ExampleAsync().Wait();
}
static async Task ExampleAsync()
{
// Before the gRPC native library (gRPC Core) is lazily loaded and initialized,
// an environment variable must be set so BoringSSL is configured to use ECDSA TLS
// certificates (required by btcwallet).
Environment.SetEnvironmentVariable("GRPC_SSL_CIPHER_SUITES", "HIGH+ECDSA");
var walletAppData = Portability.LocalAppData(Environment.OSVersion.Platform, "mod");
var walletTlsCertFile = Path.Combine(walletAppData, "rpc.cert");
var cert = await FileUtils.ReadFileAsync(walletTlsCertFile);
var channel = new Channel("localhost:18332", new SslCredentials(cert));
try
{
var c = WalletService.NewClient(channel);
var balanceRequest = new BalanceRequest
{
AccountNumber = 0,
RequiredConfirmations = 1,
};
var balanceResponse = await c.BalanceAsync(balanceRequest);
Console.WriteLine($"Spendable balance: {balanceResponse.Spendable} Satoshis");
}
finally
{
await channel.ShutdownAsync();
}
}
}
static class FileUtils
{
public static async Task<string> ReadFileAsync(string filePath)
{
using (var r = new StreamReader(filePath, Encoding.UTF8))
{
return await r.ReadToEndAsync();
}
}
}
static class Portability
{
public static string LocalAppData(PlatformID platform, string processName)
{
if (processName == null)
throw new ArgumentNullException(nameof(processName));
if (processName.Length == 0)
throw new ArgumentException(nameof(processName) + " may not have zero length");
switch (platform)
{
case PlatformID.Win32NT:
return Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
ToUpper(processName));
case PlatformID.MacOSX:
return Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal),
"Library", "Application Support", ToUpper(processName));
case PlatformID.Unix:
return Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.Personal),
ToDotLower(processName));
default:
throw new PlatformNotSupportedException($"PlatformID={platform}");
}
}
private static string ToUpper(string value)
{
var firstChar = value[0];
if (char.IsUpper(firstChar))
return value;
else
return char.ToUpper(firstChar) + value.Substring(1);
}
private static string ToDotLower(string value)
{
var firstChar = value[0];
return "." + char.ToLower(firstChar) + value.Substring(1);
}
}
}
```
## Node.js
First, install gRPC (either by building the latest source release, or by
installing a gRPC binary development package through your operating system's
package manager). This is required to install the npm module as it wraps the
native C library (gRPC Core) with C++ bindings. Installing
the [grpc module](https://www.npmjs.com/package/grpc) to your project can then
be done by executing:
```
npm install grpc
```
A Node.js client does not require generating JavaScript stub files for the
wallet's API from the `.proto`. Instead, a call to `grpc.load`
with the `.proto` file path dynamically loads the Protobuf descriptor and
generates bindings for each service. Either copy the `.proto` to the client
project directory, or reference the file from the
`btcwallet` project directory.
```JavaScript
// Before the gRPC native library (gRPC Core) is lazily loaded and
// initialized, an environment variable must be set so BoringSSL is
// configured to use ECDSA TLS certificates (required by btcwallet).
process.env['GRPC_SSL_CIPHER_SUITES'] = 'HIGH+ECDSA';
var fs = require('fs');
var path = require('path');
var os = require('os');
var grpc = require('grpc');
var protoDescriptor = grpc.load('./api.proto');
var walletrpc = protoDescriptor.walletrpc;
var certPath = path.join(process.env.HOME, '.btcwallet', 'rpc.cert');
if (os.platform == 'win32') {
certPath = path.join(process.env.LOCALAPPDATA, 'Btcwallet', 'rpc.cert');
} else if (os.platform == 'darwin') {
certPath = path.join(process.env.HOME, 'Library', 'Application Support',
'Btcwallet', 'rpc.cert');
}
var cert = fs.readFileSync(certPath);
var creds = grpc.credentials.createSsl(cert);
var client = new walletrpc.WalletService('localhost:18332', creds);
var request = {
account_number: 0,
required_confirmations: 1
};
client.balance(request, function (err, response) {
if (err) {
console.error(err);
} else {
console.log('Spendable balance:', response.spendable, 'Satoshis');
}
});
```
## Python
**Note:** gRPC requires Python 2.7.
After installing gRPC Core and Python development headers, `pip`
should be used to install the `grpc` module and its dependencies. Full
instructions for this procedure can be found
[here](https://github.com/grpc/grpc/blob/master/src/python/README.md).
Generate Python stubs from the `.proto`:
```bash
$ protoc -I /path/to/btcsuite/btcwallet/rpc --python_out=. --grpc_out=. \
--plugin=protoc-gen-grpc=$(which grpc_python_plugin) \
/path/to/btcwallet/rpc/api.proto
```
Implement the client:
```Python
import os
import platform
from grpc.beta import implementations
import api_pb2 as walletrpc
timeout = 1 # seconds
def main():
# Before the gRPC native library (gRPC Core) is lazily loaded and
# initialized, an environment variable must be set so BoringSSL is
# configured to use ECDSA TLS certificates (required by btcwallet).
os.environ['GRPC_SSL_CIPHER_SUITES'] = 'HIGH+ECDSA'
cert_file_path = os.path.join(os.environ['HOME'], '.btcwallet', 'rpc.cert')
if platform.system() == 'Windows':
cert_file_path = os.path.join(os.environ['LOCALAPPDATA'], "mod", "rpc.cert")
elif platform.system() == 'Darwin':
cert_file_path = os.path.join(os.environ['HOME'], 'Library', 'Application Support',
'Btcwallet', 'rpc.cert')
with open(cert_file_path, 'r') as f:
cert = f.read()
creds = implementations.ssl_client_credentials(cert, None, None)
channel = implementations.secure_channel('localhost', 18332, creds)
stub = walletrpc.beta_create_WalletService_stub(channel)
request = walletrpc.BalanceRequest(account_number = 0, required_confirmations = 1)
response = stub.Balance(request, timeout)
print 'Spendable balance: %d Satoshis' % response.spendable
if __name__ == '__main__':
main()
```

___
# Making API Changes
This document describes the process of how btcwallet developers must make
changes to the RPC API and server. Due to the use of gRPC and Protocol Buffers
for the RPC implementation, changes to this API require extra dependencies and
steps before changes to the server can be implemented.
## Requirements
- The Protocol Buffer compiler `protoc` installed with support for the `proto3`
language
The `protoc` tool is part of the Protocol Buffers project. This can be
installed [from source](https://github.com/google/protobuf/blob/master/INSTALL.txt)
, from
an [official binary release](https://github.com/google/protobuf/releases), or
through an operating system's package manager.
- The gRPC `protoc` plugin for Go
This plugin is written in Go and can be installed using `go get`:
```
go get github.com/golang/protobuf/protoc-gen-go
```
- Knowledge of Protocol Buffers version 3 (proto3)
Note that a full installation of gRPC Core is not required, and only the
`protoc` compiler and Go plugins are necessary. This is due to the project using
a pure Go gRPC implementation instead of wrapping the C library from gRPC Core.
## Step 1: Modify the `.proto`
Once the developer dependencies have been met, changes can be made to the API by
modifying the Protocol Buffers descriptor file [`api.proto`](../api.proto).
The API is versioned according to the rules
of [Semantic Versioning 2.0](http://semver.org/). After any changes, bump the
API version in the [API specification](api.md) and add the changes to the spec.
Unless backwards compatibility is broken (and the version is bumped to represent
this change), message fields must never be removed or changed, and new fields
must always be appended.
It is forbidden to use the `required` attribute on a message field as this can
cause errors during parsing when the new API is used by an older client.
Instead, the (implicit) optional attribute is used, and the server
implementation must return an appropriate error if the new request field is not
set to a valid value.
## Step 2: Compile the `.proto`
Once changes to the descriptor file and API specification have been made, the
`protoc` compiler must be used to compile the descriptor into a Go package. This
code contains interfaces (stubs) for each service (to be implemented by the
wallet) and message types used for each RPC. This same code can also be imported
by a Go client that then calls same interface methods to perform RPC with the
wallet.
By committing the autogenerated package to the project repo, the `proto3`
compiler and plugin are not needed by users installing the project by source or
by other developers not making changes to the RPC API.
A `sh` shell script is included to compile the Protocol Buffers descriptor. It
must be run from the `rpc` directory.
```bash
$ sh regen.sh
```
If a `sh` shell is unavailable, the command can be run manually instead (again
from the `rpc` directory).
```
protoc -I. api.proto --go_out=plugins=grpc:walletrpc
```
TODO(jrick): This step could be simplified and be more portable by putting the
commands in a Go source file and executing them with `go generate`. It should,
however, only be run when API changes are performed (not
with `go generate ./...` in the project root) since not all developers are
expected to have
`protoc` installed.
## Step 3: Implement the API change in the RPC server
After the Go code for the API has been regenerated, the necessary changes can be
implemented in the [`rpcserver`](../rpcserver/) package.
## Additional Resources
- [Protocol Buffers Language Guide (proto3)](https://developers.google.com/protocol-buffers/docs/proto3)
- [Protocol Buffers Basics: Go](https://developers.google.com/protocol-buffers/docs/gotutorial)
- [gRPC Basics: Go](http://www.grpc.io/docs/tutorials/basic/go.html)

___
**File:** `cmd/wallet/docs/README.md`
### Guides
[Rebuilding all transaction history with forced rescans](https://github.com/p9c/p9/walletmain/tree/master/docs/force_rescans.md)

___
package wallet
import (
"encoding/binary"
"path/filepath"
"github.com/p9c/p9/pkg/walletdb"
"github.com/p9c/p9/pkg/wtxmgr"
"github.com/p9c/p9/pod/config"
)
func DropWalletHistory(w *Wallet, cfg *config.Config) (e error) {
var (
// Namespace keys.
syncBucketName = []byte("sync")
waddrmgrNamespace = []byte("waddrmgr")
wtxmgrNamespace = []byte("wtxmgr")
// Sync related key names (sync bucket).
syncedToName = []byte("syncedto")
startBlockName = []byte("startblock")
recentBlocksName = []byte("recentblocks")
)
dbPath := filepath.Join(
cfg.DataDir.V(),
cfg.Network.V(), "wallet.db",
)
// I.Ln("dbPath", dbPath)
var db walletdb.DB
db, e = walletdb.Open("bdb", dbPath)
if E.Chk(e) {
// DBError("failed to open database:", err)
return e
}
defer db.Close()
D.Ln("dropping wtxmgr namespace")
e = walletdb.Update(
db, func(tx walletdb.ReadWriteTx) (e error) {
D.Ln("deleting top level bucket")
if e = tx.DeleteTopLevelBucket(wtxmgrNamespace); E.Chk(e) {
}
if e != nil && e != walletdb.ErrBucketNotFound {
return e
}
var ns walletdb.ReadWriteBucket
D.Ln("creating new top level bucket")
if ns, e = tx.CreateTopLevelBucket(wtxmgrNamespace); E.Chk(e) {
return e
}
if e = wtxmgr.Create(ns); E.Chk(e) {
return e
}
ns = tx.ReadWriteBucket(waddrmgrNamespace).NestedReadWriteBucket(syncBucketName)
startBlock := ns.Get(startBlockName)
D.Ln("putting start block", startBlock)
if e = ns.Put(syncedToName, startBlock); E.Chk(e) {
return e
}
recentBlocks := make([]byte, 40)
copy(recentBlocks[0:4], startBlock[0:4])
copy(recentBlocks[8:], startBlock[4:])
binary.LittleEndian.PutUint32(recentBlocks[4:8], uint32(1))
defer D.Ln("put recent blocks")
return ns.Put(recentBlocksName, recentBlocks)
},
)
if E.Chk(e) {
return e
}
D.Ln("updated wallet")
// if w != nil {
// // Rescan chain to ensure balance is correctly regenerated
// job := &wallet.RescanJob{
// InitialSync: true,
// }
// // Submit rescan job and log when the import has completed.
// // Do not block on finishing the rescan. The rescan success
// // or failure is logged elsewhere, and the channel is not
// // required to be read, so discard the return value.
// errC := w.SubmitRescan(job)
// select {
// case e := <-errC:
// E.Chk(e)
// case <-time.After(time.Second * 5):
// break
// }
// }
return e
}

___
**File:** `cmd/wallet/errors.go`
package wallet
import (
"errors"
"github.com/p9c/p9/pkg/btcjson"
)
// TODO(jrick): There are several error paths which 'replace' various errors
// with a more appropriate error from the json package. Create a map of
// these replacements so they can be handled once after an RPC handler has
// returned and before the error is marshaled.
//
// BTCJSONError types to simplify the reporting of specific categories of
// errors, and their *json.RPCError creation.
type (
	// DeserializationError describes a failed deserialization due to bad user input. It corresponds to
// json.ErrRPCDeserialization.
DeserializationError struct {
error
}
// InvalidParameterError describes an invalid parameter passed by the user. It corresponds to
// json.ErrRPCInvalidParameter.
InvalidParameterError struct {
error
}
// ParseError describes a failed parse due to bad user input. It corresponds to json.ErrRPCParse.
ParseError struct {
error
}
)
// Errors variables that are defined once here to avoid duplication below.
var (
ErrNeedPositiveAmount = InvalidParameterError{
errors.New("amount must be positive"),
}
ErrNeedPositiveMinconf = InvalidParameterError{
errors.New("minconf must be positive"),
}
ErrAddressNotInWallet = btcjson.RPCError{
Code: btcjson.ErrRPCWallet,
Message: "address not found in wallet",
}
ErrAccountNameNotFound = btcjson.RPCError{
Code: btcjson.ErrRPCWalletInvalidAccountName,
Message: "account name not found",
}
ErrUnloadedWallet = btcjson.RPCError{
Code: btcjson.ErrRPCWallet,
Message: "Request requires a wallet but wallet has not loaded yet",
}
ErrWalletUnlockNeeded = btcjson.RPCError{
Code: btcjson.ErrRPCWalletUnlockNeeded,
Message: "Enter the wallet passphrase with walletpassphrase first",
}
ErrNotImportedAccount = btcjson.RPCError{
Code: btcjson.ErrRPCWallet,
Message: "imported addresses must belong to the imported account",
}
ErrNoTransactionInfo = btcjson.RPCError{
Code: btcjson.ErrRPCNoTxInfo,
Message: "No information for transaction",
}
ErrReservedAccountName = btcjson.RPCError{
Code: btcjson.ErrRPCInvalidParameter,
Message: "Account name is reserved by RPC server",
}
)

___
**File:** `cmd/wallet/genapi/genapi.go`
package main
import (
"os"
"sort"
"text/template"
"github.com/p9c/p9/pkg/log"
)
type handler struct {
Method, Handler, HandlerWithChain, Cmd, ResType string
}
type handlersT []handler
func (h handlersT) Len() int {
return len(h)
}
func (h handlersT) Less(i, j int) bool {
return h[i].Method < h[j].Method
}
func (h handlersT) Swap(i, j int) {
h[i], h[j] = h[j], h[i]
}
func main() {
log.SetLogLevel("trace")
if fd, e := os.Create("rpchandlers.go"); E.Chk(e) {
} else {
defer fd.Close()
t := template.Must(template.New("noderpc").Parse(NodeRPCHandlerTpl))
sort.Sort(handlers)
if e = t.Execute(fd, handlers); E.Chk(e) {
}
}
}
const (
RPCMapName = "RPCHandlers"
Worker = "CAPI"
)
var NodeRPCHandlerTpl = `// generated by go run ./genapi/.; DO NOT EDIT
//
`+`//go:generate go run ./genapi/.
package wallet
import (
"io"
"net/rpc"
"time"
"github.com/p9c/p9/pkg/qu"
"github.com/p9c/p9/pkg/btcjson"
"github.com/p9c/p9/pkg/chainclient"
)
// API stores the channel, parameters and result values from calls via the channel
type API struct {
Ch interface{}
Params interface{}
Result interface{}
}
// CAPI is the central structure for configuration and access to a net/rpc API access endpoint for this RPC API
type CAPI struct {
Timeout time.Duration
quit qu.C
}
// NewCAPI returns a new CAPI
func NewCAPI(quit qu.C, timeout ...time.Duration) (c *CAPI) {
c = &CAPI{quit: quit}
if len(timeout)>0 {
c.Timeout = timeout[0]
} else {
c.Timeout = time.Second * 5
}
return
}
// CAPIClient is a wrapper around RPC calls
type CAPIClient struct {
*rpc.Client
}
// NewCAPIClient creates a new client for a kopach_worker. Note that any kind of connection can be used here,
// other than the StdConn
func NewCAPIClient(conn io.ReadWriteCloser) *CAPIClient {
return &CAPIClient{rpc.NewClient(conn)}
}
type (
// None means no parameters it is not checked so it can be nil
None struct{} {{range .}}
// {{.Handler}}Res is the result from a call to {{.Handler}}
{{.Handler}}Res struct { Res *{{.ResType}}; e error }{{end}}
)
// RequestHandler is a handler function to handle an unmarshaled and parsed request into a marshalable response. If the
// error is a *json.RPCError or any of the above special error classes, the server will respond with the JSON-RPC
// appropriate error code. All other errors use the wallet catch-all error code, json.ErrRPCWallet.
type RequestHandler func(interface{}, *Wallet,
...*chainclient.RPCClient) (interface{}, error)
// ` + RPCMapName + ` is all of the RPC calls available
//
// - Handler is the handler function
//
// - Call is a channel carrying a struct containing parameters and error that is listened to in RunAPI to dispatch the
// calls
//
// - Result is a bundle of command parameters and a channel that the result will be sent back on
//
// Get and save the Result function's return value, and you can then use the generated call, check, result and wait
// functions for asynchronous and synchronous calls to RPC functions
var ` + RPCMapName + ` = map[string]struct {
Handler RequestHandler
// Function variables cannot be compared against anything but nil, so use a boolean to record whether help
// generation is necessary. This is used by the tests to ensure that help can be generated for every implemented
// method.
//
// A single map and this bool are used rather than several maps for the unimplemented handlers so every
// method has exactly one handler function.
//
// The Result field holds a function that returns a new API carrying a fresh result channel of this call's return
// type. This makes it possible for callers to receive a response in the cpc library, which implements the functions as channel pipes
NoHelp bool
Call chan API
Params interface{}
Result func() API
}{
{{range .}} "{{.Method}}":{
Handler: {{.Handler}}, Call: make(chan API, 32),
Result: func() API { return API{Ch: make(chan {{.Handler}}Res)} }},
{{end}}
}
// API functions
//
// The functions here provide access to the RPC through a convenient set of functions generated for each call in the
// RPC API, to request a call, check for a result, access the result, and wait on a result
{{range .}}
// {{.Handler}} calls the method with the given parameters
func (a API) {{.Handler}}(cmd {{.Cmd}}) (e error) {
` + RPCMapName + `["{{.Method}}"].Call <- API{a.Ch, cmd, nil}
return
}
// {{.Handler}}Check checks if a new message arrived on the result channel and returns true if it does, as well as
// storing the value in the Result field. The pointer receiver is needed so the stored Result survives the call
func (a *API) {{.Handler}}Check() (isNew bool) {
select {
case o := <- a.Ch.(chan {{.Handler}}Res):
if o.e != nil {
a.Result = o.e
} else {
a.Result = o.Res
}
isNew = true
default:
}
return
}
// {{.Handler}}GetRes returns a pointer to the value in the Result field
func (a API) {{.Handler}}GetRes() (out *{{.ResType}}, e error) {
out, _ = a.Result.(*{{.ResType}})
e, _ = a.Result.(error)
return
}
// {{.Handler}}Wait calls the method and blocks until it returns or 5 seconds passes
func (a API) {{.Handler}}Wait(cmd {{.Cmd}}) (out *{{.ResType}}, e error) {
` + RPCMapName + `["{{.Method}}"].Call <- API{a.Ch, cmd, nil}
select {
case <-time.After(time.Second*5):
break
case o := <- a.Ch.(chan {{.Handler}}Res):
out, e = o.Res, o.e
}
return
}
{{end}}
// RunAPI starts up the api handler server that receives rpc.API messages and runs the handler and returns the result
// Note that the parameters are type asserted to catch consumers of the API sending the wrong message types, not
// because it is necessary, since they are interfaces end to end
func RunAPI(chainRPC *chainclient.RPCClient, wallet *Wallet,
quit qu.C) {
nrh := ` + RPCMapName + `
go func() {
D.Ln("starting up wallet cAPI")
var e error
var res interface{}
for {
select { {{range .}}
case msg := <-nrh["{{.Method}}"].Call:
if res, e = nrh["{{.Method}}"].
Handler(msg.Params.({{.Cmd}}), wallet,
chainRPC); E.Chk(e) {
}
if r, ok := res.({{.ResType}}); ok {
msg.Ch.(chan {{.Handler}}Res) <- {{.Handler}}Res{&r, e}
} {{end}}
case <-quit.Wait():
D.Ln("stopping wallet cAPI")
return
}
}
}()
}
// RPC API functions to use with net/rpc
{{range .}}
// The result channel carries {{.Handler}}Res values, so receive that type rather than asserting chan {{.ResType}},
// which would panic at runtime; net/rpc also requires the reply argument to be a pointer for the result to reach
// the caller
func (c *CAPI) {{.Handler}}(req {{.Cmd}}, resp *{{.ResType}}) (e error) {
nrh := ` + RPCMapName + `
res := nrh["{{.Method}}"].Result()
res.Params = req
nrh["{{.Method}}"].Call <- res
select {
case o := <-res.Ch.(chan {{.Handler}}Res):
if o.e != nil {
e = o.e
} else if o.Res != nil {
*resp = *o.Res
}
case <-time.After(c.Timeout):
case <-c.quit.Wait():
}
return
}
{{end}}
// Client call wrappers for a CAPI client with a given Conn
{{range .}}
func (r *CAPIClient) {{.Handler}}(cmd ...{{.Cmd}}) (res {{.ResType}}, e error) {
var c {{.Cmd}}
if len(cmd) > 0 {
c = cmd[0]
}
if e = r.Call("` + Worker + `.{{.Handler}}", c, &res); E.Chk(e) {
}
return
}
{{end}}
`
var handlers = handlersT{
{
Method: "addmultisigaddress",
Handler: "AddMultiSigAddress",
Cmd: "*btcjson.AddMultisigAddressCmd",
ResType: "string",
},
{
Method: "createmultisig",
Handler: "CreateMultiSig",
Cmd: "*btcjson.CreateMultisigCmd",
ResType: "btcjson.CreateMultiSigResult",
},
{
Method: "dumpprivkey",
Handler: "DumpPrivKey",
Cmd: "*btcjson.DumpPrivKeyCmd",
ResType: "string",
},
{
Method: "getaccount",
Handler: "GetAccount",
Cmd: "*btcjson.GetAccountCmd",
ResType: "string",
},
{
Method: "getaccountaddress",
Handler: "GetAccountAddress",
Cmd: "*btcjson.GetAccountAddressCmd",
ResType: "string",
},
{
Method: "getaddressesbyaccount",
Handler: "GetAddressesByAccount",
Cmd: "*btcjson.GetAddressesByAccountCmd",
ResType: "[]string",
},
{
Method: "getbalance",
Handler: "GetBalance",
Cmd: "*btcjson.GetBalanceCmd",
ResType: "float64",
},
{
Method: "getbestblockhash",
Handler: "GetBestBlockHash",
Cmd: "*None",
ResType: "string",
},
{
Method: "getblockcount",
Handler: "GetBlockCount",
Cmd: "*None",
ResType: "int32",
},
{
Method: "getinfo",
Handler: "GetInfo",
Cmd: "*None",
ResType: "btcjson.InfoWalletResult",
},
{
Method: "getnewaddress",
Handler: "GetNewAddress",
Cmd: "*btcjson.GetNewAddressCmd",
ResType: "string",
},
{
Method: "getrawchangeaddress",
Handler: "GetRawChangeAddress",
Cmd: "*btcjson.GetRawChangeAddressCmd",
ResType: "string",
},
{
Method: "getreceivedbyaccount",
Handler: "GetReceivedByAccount",
Cmd: "*btcjson.GetReceivedByAccountCmd",
ResType: "float64",
},
{
Method: "getreceivedbyaddress",
Handler: "GetReceivedByAddress",
Cmd: "*btcjson.GetReceivedByAddressCmd",
ResType: "float64",
},
{
Method: "gettransaction",
Handler: "GetTransaction",
Cmd: "*btcjson.GetTransactionCmd",
ResType: "btcjson.GetTransactionResult",
},
{
Method: "help",
Handler: "HelpNoChainRPC",
HandlerWithChain: "HelpWithChainRPC",
Cmd: "btcjson.HelpCmd",
ResType: "string",
},
{
Method: "importprivkey",
Handler: "ImportPrivKey",
Cmd: "*btcjson.ImportPrivKeyCmd",
ResType: "None",
},
{
Method: "keypoolrefill",
Handler: "KeypoolRefill",
Cmd: "*None",
ResType: "None",
},
{
Method: "listaccounts",
Handler: "ListAccounts",
Cmd: "*btcjson.ListAccountsCmd",
ResType: "map[string]float64",
},
{
Method: "listlockunspent",
Handler: "ListLockUnspent",
Cmd: "*None",
ResType: "[]btcjson.TransactionInput",
},
{
Method: "listreceivedbyaccount",
Handler: "ListReceivedByAccount",
Cmd: "*btcjson.ListReceivedByAccountCmd",
ResType: "[]btcjson.ListReceivedByAccountResult",
},
{
Method: "listreceivedbyaddress",
Handler: "ListReceivedByAddress",
Cmd: "*btcjson.ListReceivedByAddressCmd",
ResType: "btcjson.ListReceivedByAddressResult",
},
{
Method: "listsinceblock",
Handler: "ListSinceBlock",
HandlerWithChain: "ListSinceBlock",
Cmd: "btcjson.ListSinceBlockCmd",
ResType: "btcjson.ListSinceBlockResult",
},
{
Method: "listtransactions",
Handler: "ListTransactions",
Cmd: "*btcjson.ListTransactionsCmd",
ResType: "[]btcjson.ListTransactionsResult",
},
{
Method: "listunspent",
Handler: "ListUnspent",
Cmd: "*btcjson.ListUnspentCmd",
ResType: "[]btcjson.ListUnspentResult",
},
{
Method: "lockunspent",
Handler: "LockUnspent",
HandlerWithChain: "LockUnspent",
Cmd: "btcjson.LockUnspentCmd",
ResType: "bool",
},
{
Method: "sendmany",
Handler: "SendMany",
Cmd: "*btcjson.SendManyCmd",
ResType: "string",
},
{
Method: "sendtoaddress",
Handler: "SendToAddress",
Cmd: "*btcjson.SendToAddressCmd",
ResType: "string",
},
{
Method: "settxfee",
Handler: "SetTxFee",
Cmd: "*btcjson.SetTxFeeCmd",
ResType: "bool",
},
{
Method: "signmessage",
Handler: "SignMessage",
Cmd: "*btcjson.SignMessageCmd",
ResType: "string",
},
{
Method: "signrawtransaction",
Handler: "SignRawTransaction",
HandlerWithChain: "SignRawTransaction",
Cmd: "btcjson.SignRawTransactionCmd",
ResType: "btcjson.SignRawTransactionResult",
},
{
Method: "validateaddress",
Handler: "ValidateAddress",
Cmd: "*btcjson.ValidateAddressCmd",
ResType: "btcjson.ValidateAddressWalletResult",
},
{
Method: "verifymessage",
Handler: "VerifyMessage",
Cmd: "*btcjson.VerifyMessageCmd",
ResType: "bool",
},
{
Method: "walletlock",
Handler: "WalletLock",
Cmd: "*None",
ResType: "None",
},
{
Method: "walletpassphrase",
Handler: "WalletPassphrase",
Cmd: "*btcjson.WalletPassphraseCmd",
ResType: "None",
},
{
Method: "walletpassphrasechange",
Handler: "WalletPassphraseChange",
Cmd: "*btcjson.WalletPassphraseChangeCmd",
ResType: "None",
},
{
Method: "createnewaccount",
Handler: "CreateNewAccount",
Cmd: "*btcjson.CreateNewAccountCmd",
ResType: "None",
},
{
Method: "getbestblock",
Handler: "GetBestBlock",
Cmd: "*None",
ResType: "btcjson.GetBestBlockResult",
},
{
Method: "getunconfirmedbalance",
Handler: "GetUnconfirmedBalance",
Cmd: "*btcjson.GetUnconfirmedBalanceCmd",
ResType: "float64",
},
{
Method: "listaddresstransactions",
Handler: "ListAddressTransactions",
Cmd: "*btcjson.ListAddressTransactionsCmd",
ResType: "[]btcjson.ListTransactionsResult",
},
{
Method: "listalltransactions",
Handler: "ListAllTransactions",
Cmd: "*btcjson.ListAllTransactionsCmd",
ResType: "[]btcjson.ListTransactionsResult",
},
{
Method: "renameaccount",
Handler: "RenameAccount",
Cmd: "*btcjson.RenameAccountCmd",
ResType: "None",
},
{
Method: "walletislocked",
Handler: "WalletIsLocked",
Cmd: "*None",
ResType: "bool",
},
{
Method: "dropwallethistory",
Handler: "HandleDropWalletHistory",
Cmd: "*None",
ResType: "string",
},
}
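The generator above boils down to two moving parts: a `sort.Interface` over the handler rows so the emitted file is deterministic, and a `text/template` executed over the sorted slice. A minimal self-contained sketch of that flow, rendering into a buffer instead of `rpchandlers.go` (the names `byMethod` and `render` are illustrative, not from the repo):

```go
package main

import (
	"bytes"
	"fmt"
	"sort"
	"text/template"
)

// handler mirrors the generator's row type: one entry per RPC method.
type handler struct {
	Method, Handler, Cmd string
}

// byMethod implements sort.Interface so output order is stable,
// the same reason main() calls sort.Sort before executing its template.
type byMethod []handler

func (h byMethod) Len() int           { return len(h) }
func (h byMethod) Less(i, j int) bool { return h[i].Method < h[j].Method }
func (h byMethod) Swap(i, j int)      { h[i], h[j] = h[j], h[i] }

// render sorts the rows and executes a tiny template over them,
// as main() does with NodeRPCHandlerTpl.
func render(rows []handler) (string, error) {
	sort.Sort(byMethod(rows))
	t := template.Must(template.New("demo").Parse(
		`{{range .}}func (a API) {{.Handler}}(cmd {{.Cmd}}) error
{{end}}`))
	var buf bytes.Buffer
	if e := t.Execute(&buf, rows); e != nil {
		return "", e
	}
	return buf.String(), nil
}

func main() {
	out, e := render([]handler{
		{Method: "getblockcount", Handler: "GetBlockCount", Cmd: "*None"},
		{Method: "getbalance", Handler: "GetBalance", Cmd: "*btcjson.GetBalanceCmd"},
	})
	if e != nil {
		panic(e)
	}
	// GetBalance is emitted before GetBlockCount because rows are sorted by method name
	fmt.Print(out)
}
```

This is why the `handlers` slice above does not need to be declared in any particular order: the generator sorts before rendering.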

cmd/wallet/genapi/log.go (new file, 43 lines)

@@ -0,0 +1,43 @@
package main
import (
"github.com/p9c/p9/pkg/log"
"github.com/p9c/p9/version"
)
var subsystem = log.AddLoggerSubsystem(version.PathBase)
var F, E, W, I, D, T log.LevelPrinter = log.GetLogPrinterSet(subsystem)
func init() {
// to filter out this package, uncomment the following
// var _ = logg.AddFilteredSubsystem(subsystem)
// to highlight this package, uncomment the following
// var _ = logg.AddHighlightedSubsystem(subsystem)
// these are here to test whether they are working
// F.Ln("F.Ln")
// E.Ln("E.Ln")
// W.Ln("W.Ln")
// I.Ln("I.Ln")
// D.Ln("D.Ln")
// T.Ln("T.Ln")
// F.F("%s", "F.F")
// E.F("%s", "E.F")
// W.F("%s", "W.F")
// I.F("%s", "I.F")
// D.F("%s", "D.F")
// T.F("%s", "T.F")
// F.C(func() string { return "F.C" })
// E.C(func() string { return "E.C" })
// W.C(func() string { return "W.C" })
// I.C(func() string { return "I.C" })
// D.C(func() string { return "D.C" })
// T.C(func() string { return "T.C" })
// F.Chk(errors.New("F.Chk"))
// E.Chk(errors.New("E.Chk"))
// W.Chk(errors.New("W.Chk"))
// I.Chk(errors.New("I.Chk"))
// D.Chk(errors.New("D.Chk"))
// T.Chk(errors.New("T.Chk"))
}

Some files were not shown because too many files have changed in this diff.