# 0x7FFFFFFFFFFFFFFF

*18,446,744,073,709,551,615 redirects here; that is the corresponding unsigned maximum, 2⁶⁴ − 1.*
**9,223,372,036,854,775,807**

- **Cardinal:** nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six billion eight hundred fifty-four million seven hundred seventy-five thousand eight hundred seven
- **Ordinal:** 9,223,372,036,854,775,807th (nine quintillion two hundred twenty-three quadrillion three hundred seventy-two trillion thirty-six billion eight hundred fifty-four million seven hundred seventy-five thousand eight hundred seventh)
- **Factorization:** 7² × 73 × 127 × 337 × 92737 × 649657
- **Greek numeral:** ${\displaystyle {\stackrel {\mathrm {\sampi} \kappa \beta \gamma \tau \mathrm {o} \beta \tau \xi \eta \epsilon \upsilon \mathrm {o} \zeta }{\mathrm {M} }}}$͵εωζ´
- **Roman numeral:** ${\displaystyle {\overset {ix}{MMMMMM}}\quad {\overset {ccxxiii}{MMMMM}}\quad {\overset {ccclxxii}{MMMM}}\quad }$ ${\displaystyle {\overset {xxxvi}{MMM}}\quad {\overset {dcccliv}{MM}}\quad {\overset {dcclxxv}{M}}\quad {\overset {}{DCCCVII}}}$

The number 9,223,372,036,854,775,807 is the integer equal to 2⁶³ − 1. Its prime factorization is 7² · 73 · 127 · 337 · 92737 · 649657, which is equal to Φ₃(2) · Φ₇(2) · Φ₉(2) · Φ₂₁(2) · Φ₆₃(2).

## In computing

The number 9,223,372,036,854,775,807, equivalent to the hexadecimal value 7FFF,FFFF,FFFF,FFFF₁₆, is the maximum value for a 64-bit signed integer in computing. It is therefore the maximum value for a variable declared as a long integer (long, long long int, or bigint) in many programming languages running on modern computers.[1][2][3] The presence of the value may reflect an integer overflow or other error.[4]

This value is also the largest positive signed address offset for 64-bit CPUs utilizing sign-extended memory addressing (such as the x86-64 architecture, which calls this "canonical form" extended addressing[5] (p. 130)). Being an odd value, its appearance may reflect an erroneous (misaligned) memory address.

The C standard library data type time_t, used on operating systems such as Unix, is typically implemented as either a 32- or 64-bit signed integer value, counting the number of seconds since the start of the Unix epoch (midnight UTC of 1 January 1970).[6] Systems employing a 32-bit type are susceptible to the Year 2038 problem, so many implementations have moved to a wider 64-bit type, whose maximal value of 2⁶³ − 1 corresponds to a moment approximately 292 billion years after the start of Unix time.

The FILETIME value used in Windows is a 64-bit value corresponding to the number of 100-nanosecond intervals since midnight UTC of 1 January 1601. The latest time that can be represented using this value is 02:48:05.4775807 UTC on 14 September 30828 (corresponding to 9,223,372,036,854,775,807 100-nanosecond intervals since 1 January 1601).[7] Beyond this day, Windows will display an "invalid system time" error on startup.

Other systems encode system time as a signed 64-bit integer count of the number of ticks since some epoch date. On some systems (such as the Java standard library), each tick is one millisecond in duration, yielding a usable time range extending 292 million years into the future.

In C and C++ this is the maximum value of the signed long long data type; in C#, of long.[8] The constant is available as long.MaxValue. In the .NET Framework it is available as the Int64 struct.[9]

The unsigned equivalent is 18,446,744,073,709,551,615, which is one more than twice this number, or 2⁶⁴ − 1.