To satisfy the rules of Boolean logic, True doesn't have to be defined at all. With False = 0, as in every language I'm aware of, the assumption that True is anything not equal to False (i.e. anything nonzero) is sufficient to express every possible logical combination. The numeric representation of True as -1 in retro-BASICs and other languages with a long historical record is purely traditional and strictly implementation specific; any representation would do as long as it is consistent and accounted for by the engine.
Note that C defines False as int 0, and True is in fact anything that isn't False, with the built-in logical and relational operators producing 1 for True. C++ and C# follow the same pattern, and FBSL follows it too. Classic Visual Basic and VB.NET, by contrast, keep the traditional -1 when True is converted to a number, though they still treat any nonzero value as True in a condition.
Also note that, regardless of the particular numeric representation of True and False in a given language, many languages will not let you do arithmetic (hexadecimal, decimal, octal, or binary) directly with Booleans, nor compare Boolean values for equality/inequality with numeric values, precisely because the sign of the numeric representation might otherwise affect the result.
For the sake of compatibility with major modern BASICs and with C/C++/C# and their derivatives, I'd recommend sticking with the C interpretation of Booleans when designing your own Boolean-based branching in your apps.