These options control various sorts of optimizations.
Without any optimization option, the compiler's goal is to reduce the cost of compilation and to make debugging produce the expected results. Statements are independent: if you stop the program with a breakpoint between statements, you can then assign a new value to any variable or change the program counter to any other statement in the function and get exactly the results you would expect from the source code.
Turning on optimization flags makes the compiler attempt to improve the performance and/or code size at the expense of compilation time and possibly the ability to debug the program.
The compiler performs optimization based on the knowledge it has of the program. Optimization levels -O and above, in particular, enable unit-at-a-time mode, which allows the compiler to consider information gained from later functions in the file when compiling a function. Compiling multiple files at once to a single output file in unit-at-a-time mode allows the compiler to use information gained from all of the files when compiling each of them.
Not all optimizations are controlled directly by a flag. Only optimizations that have a flag are listed.
-O
-O1
Optimize. Optimizing compilation takes somewhat more time, and a lot more memory for a large function.
With -O, the compiler tries to reduce code size and execution time, without performing any optimizations that take a great deal of compilation time.
-O turns on the following optimization flags:
-fauto-inc-dec -fcprop-registers -fdce -fdefer-pop -fdelayed-branch -fdse -fguess-branch-probability -fif-conversion2 -fif-conversion -finline-small-functions -fipa-pure-const -fipa-reference -fmerge-constants -fsplit-wide-types -ftree-ccp -ftree-ch -ftree-copyrename -ftree-dce -ftree-dominator-opts -ftree-dse -ftree-fre -ftree-sra -ftree-ter -funit-at-a-time
-O also turns on -fomit-frame-pointer on machines
where doing so does not interfere with debugging.
-O2
Optimize even more. GCC performs nearly all supported optimizations that do not involve a space-speed tradeoff. The compiler does not perform loop unrolling or function inlining when you specify -O2. As compared to -O, this option increases both compilation time and the performance of the generated code.
-O2 turns on all optimization flags specified by -O. It also turns on the following optimization flags:
-fthread-jumps -falign-functions -falign-jumps -falign-loops -falign-labels -fcaller-saves -fcrossjumping -fcse-follow-jumps -fcse-skip-blocks -fdelete-null-pointer-checks -fexpensive-optimizations -fgcse -fgcse-lm -foptimize-sibling-calls -fpeephole2 -fregmove -freorder-blocks -freorder-functions -frerun-cse-after-loop -fsched-interblock -fsched-spec -fschedule-insns -fschedule-insns2 -fstrict-aliasing -fstrict-overflow -ftree-pre -ftree-vrp
Please note the warning under -fgcse about
invoking -O2 on programs that use computed gotos.
-O3
Optimize yet more. -O3 turns on all optimizations specified by
-O2 and also turns on the -finline-functions,
-funswitch-loops, -fpredictive-commoning,
-fgcse-after-reload and -ftree-vectorize
options.
-O0
Reduce compilation time and make debugging produce the expected
results. This is the default.
-Os
Optimize for size. -Os enables all -O2 optimizations that do not typically increase code size. It also performs further optimizations designed to reduce code size.
-Os disables the following optimization flags:
-falign-functions -falign-jumps -falign-loops -falign-labels -freorder-blocks -freorder-blocks-and-partition -fprefetch-loop-arrays -ftree-vect-loop-version
If you use multiple -O options, with or without level numbers, the last such option is the one that is effective.
Options of the form -fflag specify machine-independent flags. Most flags have both positive and negative forms; the negative form of -ffoo is -fno-foo. In the table below, only one of the forms is listed (the one you typically will use). You can figure out the other form by either removing no- or adding it.
The following options control specific optimizations. They are either activated by -O options or are related to ones that are. You can use the following flags in the rare cases when “fine-tuning” of optimizations to be performed is desired.
-fno-default-inline
Do not make member functions inline by default merely because they are defined inside the class scope (C++ only). Otherwise, when you specify -O, member functions defined inside class scope are compiled inline by default; i.e., you don't need to add inline in front of the member function name.
-fno-defer-pop
Always pop the arguments to each function call as soon as that function returns. For machines which must pop arguments after a function call, the compiler normally lets arguments accumulate on the stack for several function calls and pops them all at once.
Disabled at levels -O, -O2, -O3, -Os.
-fforward-propagate
Perform a forward propagation pass on RTL. The pass tries to combine two instructions and checks if the result can be simplified. If loop unrolling is active, two passes are performed and the second is scheduled after loop unrolling.
This option is enabled by default at optimization levels -O2,
-O3, -Os.
-fomit-frame-pointer
Don't keep the frame pointer in a register for functions that don't need one. This avoids the instructions to save, set up and restore frame pointers; it also makes an extra register available in many functions. It also makes debugging impossible on some machines.
On some machines, such as the VAX, this flag has no effect, because
the standard calling sequence automatically handles the frame pointer
and nothing is saved by pretending it doesn't exist. The
machine-description macro FRAME_POINTER_REQUIRED controls whether a target machine supports this flag. See Register Usage (GNU Compiler Collection (GCC) Internals).
Enabled at levels -O, -O2, -O3, -Os.
-foptimize-sibling-calls
Optimize sibling and tail recursive calls.
Enabled at levels -O2, -O3, -Os.
-fno-inline
Don't pay attention to the inline keyword. Normally this option is used to keep the compiler from expanding any functions inline. Note that if you are not optimizing, no functions can be expanded inline.
-finline-small-functions
Integrate functions into their callers when their body is smaller than expected function call code (so overall size of program gets smaller). The compiler heuristically decides which functions are simple enough to be worth integrating in this way.
Enabled at level -O2.
-finline-functions
Integrate all simple functions into their callers. The compiler heuristically decides which functions are simple enough to be worth integrating in this way.
If all calls to a given function are integrated, and the function is declared static, then the function is normally not output as assembler code in its own right.
Enabled at level -O3.
-finline-functions-called-once
Consider all static functions called once for inlining into their caller even if they are not marked inline. If a call to a given function is integrated, then the function is not output as assembler code in its own right.
Enabled if -funit-at-a-time is enabled.
-fearly-inlining
Inline functions marked by always_inline, and functions whose body seems smaller than the function call overhead, early, before doing -fprofile-generate instrumentation and the real inlining pass. Doing so makes profiling significantly cheaper and usually makes inlining faster on programs that have large chains of nested wrapper functions.
Enabled by default.
-finline-limit=n
By default, GCC limits the size of functions that can be inlined. This flag allows coarse control of this limit. n is the size of functions that can be inlined in number of pseudo instructions.
Inlining is actually controlled by a number of parameters, which may be specified individually by using --param name=value. The -finline-limit=n option sets some of these parameters as follows:
max-inline-insns-single
max-inline-insns-auto
See below for documentation of the individual parameters controlling inlining and for the defaults of these parameters.
Note: there may be no value to -finline-limit that results in default behavior.
Note: pseudo instruction represents, in this particular context, an abstract measurement of a function's size. In no way does it represent a count of assembly instructions and as such its exact meaning might change from one release to another.
-fkeep-inline-functions
In C, emit static functions that are declared inline into the object file, even if the function has been inlined into all of its callers. This switch does not affect functions using the extern inline extension in GNU C89. In C++, emit any and all inline functions into the object file.
-fkeep-static-consts
Emit variables declared static const when optimization isn't turned on, even if the variables aren't referenced.
GCC enables this option by default. If you want to force the compiler to
check if the variable was referenced, regardless of whether or not
optimization is turned on, use the -fno-keep-static-consts option.
-fmerge-constants
Attempt to merge identical constants (string constants and floating point constants) across compilation units.
This option is the default for optimized compilation if the assembler and linker support it. Use -fno-merge-constants to inhibit this behavior.
Enabled at levels -O, -O2, -O3, -Os.
-fmerge-all-constants
Attempt to merge identical constants and identical variables.
This option implies -fmerge-constants. In addition to
-fmerge-constants this considers e.g. even constant initialized
arrays or initialized constant variables with integral or floating point
types. Languages like C or C++ require each non-automatic variable to have a distinct location, so using this option will result in non-conforming behavior.
-fmodulo-sched
Perform swing modulo scheduling immediately before the first scheduling
pass. This pass looks at innermost loops and reorders their
instructions by overlapping different iterations.
-fmodulo-sched-allow-regmoves
Perform more aggressive SMS based modulo scheduling with register moves
allowed. By setting this flag certain anti-dependence edges will be deleted, which will trigger the generation of reg-moves based on the
life-range analysis. This option is effective only with
-fmodulo-sched enabled.
-fno-branch-count-reg
Do not use “decrement and branch” instructions on a count register, but instead generate a sequence of instructions that decrement a register, compare it against zero, then branch based upon the result. This option is only meaningful on architectures that support such instructions, which include x86, PowerPC, IA-64 and S/390.
The default is -fbranch-count-reg.
-fno-function-cse
Do not put function addresses in registers; make each instruction that calls a constant function contain the function's address explicitly.
This option results in less efficient code, but some strange hacks that alter the assembler output may be confused by the optimizations performed when this option is not used.
The default is -ffunction-cse.
-fno-zero-initialized-in-bss
If the target supports a BSS section, GCC by default puts variables that are initialized to zero into BSS. This can save space in the resulting code.
This option turns off this behavior because some programs explicitly rely on variables going to the data section, e.g., so that the resulting executable can find the beginning of that section and/or make assumptions based on that.
The default is -fzero-initialized-in-bss.
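As a small illustration (the variable names are hypothetical), these are the kinds of definitions affected:

/* Both objects are zero-initialized, so by default GCC places them in the
   BSS section and they occupy no space in the object file.  With
   -fno-zero-initialized-in-bss they go to the data section instead. */
int counters[1024];        /* implicitly zero-initialized */
int global_flag = 0;       /* explicitly initialized to zero */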
-fmudflap -fmudflapth -fmudflapir
For front-ends that support it (C and C++), instrument all risky
pointer/array dereferencing operations, some standard library
string/heap functions, and some other associated constructs with
range/validity tests. Modules so instrumented should be immune to
buffer overflows, invalid heap use, and some other classes of C/C++
programming errors. The instrumentation relies on a separate runtime
library (libmudflap), which will be linked into a program if
-fmudflap is given at link time. Run-time behavior of the
instrumented program is controlled by the MUDFLAP_OPTIONS
environment variable. See env MUDFLAP_OPTIONS=-help a.out
for its options.
Use -fmudflapth instead of -fmudflap to compile and to
link if your program is multi-threaded. Use -fmudflapir, in
addition to -fmudflap or -fmudflapth, if
instrumentation should ignore pointer reads. This produces less
instrumentation (and therefore faster execution) and still provides
some protection against outright memory corrupting writes, but allows
erroneously read data to propagate within a program.
-fthread-jumps
Perform optimizations where we check to see if a jump branches to a location where another comparison subsumed by the first is found. If so, the first branch is redirected to either the destination of the second branch or a point immediately following it, depending on whether the condition is known to be true or false.
Enabled at levels -O2, -O3, -Os.
-fsplit-wide-types
When using a type that occupies multiple registers, such as long long on a 32-bit system, split the registers apart and allocate them independently. This normally generates better code for those types, but may make debugging more difficult.
Enabled at levels -O, -O2, -O3, -Os.
-fcse-follow-jumps
In common subexpression elimination (CSE), scan through jump instructions when the target of the jump is not reached by any other path. For example, when CSE encounters an if statement with an else clause, CSE will follow the jump when the condition tested is false.
Enabled at levels -O2, -O3, -Os.
-fcse-skip-blocks
This is similar to -fcse-follow-jumps, but causes CSE to follow jumps which conditionally skip over blocks. When CSE encounters a simple if statement with no else clause, -fcse-skip-blocks causes CSE to follow the jump around the body of the if.
Enabled at levels -O2, -O3, -Os.
-frerun-cse-after-loop
Re-run common subexpression elimination after loop optimizations have been performed.
Enabled at levels -O2, -O3, -Os.
-fgcse
Perform a global common subexpression elimination pass. This pass also performs global constant and copy propagation.
Note: When compiling a program using computed gotos, a GCC extension, you may get better runtime performance if you disable the global common subexpression elimination pass by adding -fno-gcse to the command line.
Enabled at levels -O2, -O3, -Os.
-fgcse-lm
When -fgcse-lm is enabled, global common subexpression elimination will attempt to move loads which are only killed by stores into themselves. This allows a loop containing a load/store sequence to be changed to a load outside the loop, and a copy/store within the loop.
Enabled by default when gcse is enabled.
-fgcse-sm
When -fgcse-sm is enabled, a store motion pass is run after global common subexpression elimination. This pass will attempt to move stores out of loops. When used in conjunction with -fgcse-lm, loops containing a load/store sequence can be changed to a load before the loop and a store after the loop.
Not enabled at any optimization level.
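As an illustration, here is a minimal sketch (the function name is hypothetical, and the restrict qualifiers are an assumption that lets the compiler prove the accesses do not alias) of a loop these passes can transform:

void accumulate (int *restrict sum, const int *restrict v, int n)
{
  int i;
  /* *sum is loaded and stored on every iteration.  -fgcse-lm can hoist the
     load before the loop and work on a register copy inside it; adding
     -fgcse-sm can also sink the store to after the loop. */
  for (i = 0; i < n; i++)
    *sum = *sum + v[i];
}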
-fgcse-las
When -fgcse-las is enabled, the global common subexpression elimination pass eliminates redundant loads that come after stores to the same memory location (both partial and full redundancies).
Not enabled at any optimization level.
-fgcse-after-reload
When -fgcse-after-reload is enabled, a redundant load elimination pass is performed after reload. The purpose of this pass is to clean up redundant spilling.
-funsafe-loop-optimizations
If given, the loop optimizer will assume that loop indices do not overflow, and that loops with a nontrivial exit condition are not infinite. This enables a wider range of loop optimizations even if the loop optimizer itself cannot prove that these assumptions are valid. Using -Wunsafe-loop-optimizations, the compiler will warn you if it finds this kind of loop.
-fcrossjumping
Perform cross-jumping transformation. This transformation unifies equivalent code and saves code size. The resulting code may or may not perform better than without cross-jumping.
Enabled at levels -O2, -O3, -Os.
-fauto-inc-dec
Combine increments or decrements of addresses with memory accesses.
This pass is always skipped on architectures that do not have
instructions to support this. Enabled by default at -O and
higher on architectures that support this.
-fdce
Perform dead code elimination (DCE) on RTL.
Enabled by default at -O and higher.
-fdse
Perform dead store elimination (DSE) on RTL.
Enabled by default at -O and higher.
-fif-conversion
Attempt to transform conditional jumps into branch-less equivalents. This includes use of conditional moves, min, max, set flags and abs instructions, and some tricks doable by standard arithmetic. The use of conditional execution on chips where it is available is controlled by if-conversion2.
Enabled at levels -O, -O2, -O3, -Os.
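For example, a branchy maximum like the following sketch (the function name is hypothetical) is a typical candidate for conversion into a conditional move or a max instruction on targets that have one:

int imax (int a, int b)
{
  /* The conditional jump here can often be replaced by a branch-less
     sequence: a conditional move, a max instruction, or arithmetic tricks. */
  if (a > b)
    return a;
  return b;
}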
-fif-conversion2
Use conditional execution (where available) to transform conditional jumps into branch-less equivalents.
Enabled at levels -O, -O2, -O3, -Os.
-fdelete-null-pointer-checks
Use global dataflow analysis to identify and eliminate useless checks for null pointers. The compiler assumes that dereferencing a null pointer would have halted the program. If a pointer is checked after it has already been dereferenced, it cannot be null.
In some environments, this assumption is not true, and programs can safely dereference null pointers. Use -fno-delete-null-pointer-checks to disable this optimization for programs which depend on that behavior.
Enabled at levels -O2, -O3, -Os.
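A minimal sketch (the function name is hypothetical) of a check this optimization would remove:

#include <stddef.h>

int first_element (const int *p)
{
  int v = *p;        /* p is dereferenced here, so it is assumed non-null */
  if (p == NULL)     /* therefore this check can be deleted as dead code */
    return -1;
  return v;
}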
-fexpensive-optimizations
Perform a number of minor optimizations that are relatively expensive.
Enabled at levels -O2, -O3, -Os.
-foptimize-register-move
-fregmove
Attempt to reassign register numbers in move instructions and as operands of other simple instructions in order to maximize the amount of register tying. This is especially helpful on machines with two-operand instructions.
Note -fregmove and -foptimize-register-move are the same optimization.
Enabled at levels -O2, -O3, -Os.
-fdelayed-branch
If supported for the target machine, attempt to reorder instructions to exploit instruction slots available after delayed branch instructions.
Enabled at levels -O, -O2, -O3, -Os.
-fschedule-insns
If supported for the target machine, attempt to reorder instructions to eliminate execution stalls due to required data being unavailable. This helps machines that have slow floating point or memory load instructions by allowing other instructions to be issued until the result of the load or floating point instruction is required.
Enabled at levels -O2, -O3, -Os.
-fschedule-insns2
Similar to -fschedule-insns, but requests an additional pass of instruction scheduling after register allocation has been done. This is especially useful on machines with a relatively small number of registers and where memory load instructions take more than one cycle.
Enabled at levels -O2, -O3, -Os.
-fno-sched-interblock
Don't schedule instructions across basic blocks. This is normally
enabled by default when scheduling before register allocation, i.e.
with -fschedule-insns or at -O2 or higher.
-fno-sched-spec
Don't allow speculative motion of non-load instructions. This is normally
enabled by default when scheduling before register allocation, i.e.
with -fschedule-insns or at -O2 or higher.
-fsched-spec-load
Allow speculative motion of some load instructions. This only makes
sense when scheduling before register allocation, i.e. with
-fschedule-insns or at -O2 or higher.
-fsched-spec-load-dangerous
Allow speculative motion of more load instructions. This only makes
sense when scheduling before register allocation, i.e. with
-fschedule-insns or at -O2 or higher.
-fsched-stalled-insns
-fsched-stalled-insns=n
Define how many insns (if any) can be moved prematurely from the queue
of stalled insns into the ready list, during the second scheduling pass.
-fno-sched-stalled-insns means that no insns will be moved
prematurely, -fsched-stalled-insns=0 means there is no limit
on how many queued insns can be moved prematurely.
-fsched-stalled-insns without a value is equivalent to
-fsched-stalled-insns=1.
-fsched-stalled-insns-dep
-fsched-stalled-insns-dep=n
Define how many insn groups (cycles) will be examined for a dependency
on a stalled insn that is candidate for premature removal from the queue
of stalled insns. This has an effect only during the second scheduling pass,
and only if -fsched-stalled-insns is used.
-fno-sched-stalled-insns-dep is equivalent to
-fsched-stalled-insns-dep=0.
-fsched-stalled-insns-dep without a value is equivalent to
-fsched-stalled-insns-dep=1.
-fsched2-use-superblocks
When scheduling after register allocation, use the superblock scheduling algorithm. Superblock scheduling allows motion across basic block boundaries, resulting in faster schedules. This option is experimental, as not all machine descriptions used by GCC model the CPU closely enough to avoid unreliable results from the algorithm.
This only makes sense when scheduling after register allocation, i.e. with
-fschedule-insns2 or at -O2 or higher.
-fsched2-use-traces
Use -fsched2-use-superblocks algorithm when scheduling after register allocation and additionally perform code duplication in order to increase the size of superblocks using tracer pass. See -ftracer for details on trace formation.
This mode should produce faster but significantly longer programs. Also, without -fbranch-probabilities the traces constructed may not match reality and hurt performance. This only makes sense when scheduling after register allocation, i.e. with -fschedule-insns2 or at -O2 or higher.
-fsee
Eliminate redundant sign extension instructions and move the non-redundant
ones to optimal placement using lazy code motion (LCM).
-freschedule-modulo-scheduled-loops
Modulo scheduling is performed before traditional scheduling; if a loop was modulo scheduled, we may want to prevent the later scheduling passes from changing its schedule. This option controls that.
-fcaller-saves
Enable values to be allocated in registers that will be clobbered by function calls, by emitting extra instructions to save and restore the registers around such calls. Such allocation is done only when it seems to result in better code than would otherwise be produced.
This option is always enabled by default on certain machines, usually those which have no call-preserved registers to use instead.
Enabled at levels -O2, -O3, -Os.
-ftree-reassoc
Perform reassociation on trees. This flag is enabled by default
at -O and higher.
-ftree-pre
Perform partial redundancy elimination (PRE) on trees. This flag is
enabled by default at -O2 and -O3.
-ftree-fre
Perform full redundancy elimination (FRE) on trees. The difference
between FRE and PRE is that FRE only considers expressions
that are computed on all paths leading to the redundant computation.
This analysis is faster than PRE, though it exposes fewer redundancies.
This flag is enabled by default at -O and higher.
-ftree-copy-prop
Perform copy propagation on trees. This pass eliminates unnecessary
copy operations. This flag is enabled by default at -O and
higher.
-ftree-salias
Perform structural alias analysis on trees. This flag
is enabled by default at -O and higher.
-fipa-pure-const
Discover which functions are pure or constant.
Enabled by default at -O and higher.
-fipa-reference
Discover which static variables do not escape the compilation unit.
Enabled by default at -O and higher.
-fipa-struct-reorg
Perform structure reorganization optimization, which changes the layout of C-like structures in order to better utilize spatial locality. This transformation is effective for programs containing arrays of structures. It is available in two compilation modes: profile-based (enabled with -fprofile-generate) or static (which uses built-in heuristics). It requires -fipa-type-escape to provide the safety of this transformation. It works only in whole-program mode, so it requires -fwhole-program and -combine to be enabled. Structures considered cold by this transformation are not affected (see --param struct-reorg-cold-struct-ratio=value).
With this flag, the program debug info reflects a new structure layout.
-fipa-pta
Perform interprocedural pointer analysis.
-fipa-cp
Perform interprocedural constant propagation.
This optimization analyzes the program to determine when values passed
to functions are constants and then optimizes accordingly.
This optimization can substantially increase performance
if the application has constants passed to functions, but
because this optimization can create multiple copies of functions,
it may significantly increase code size.
-fipa-matrix-reorg
Perform matrix flattening and transposing.
Matrix flattening tries to replace an m-dimensional matrix with its equivalent n-dimensional matrix, where n < m. This reduces the level of indirection needed for accessing the elements of the matrix. The second optimization is matrix transposing, which attempts to change the order of the matrix's dimensions in order to improve cache locality. Both optimizations need the -fwhole-program flag. Transposing is enabled only if profiling information is available.
-ftree-sink
Perform forward store motion on trees. This flag is
enabled by default at -O and higher.
-ftree-ccp
Perform sparse conditional constant propagation (CCP) on trees. This
pass only operates on local scalar variables and is enabled by default
at -O and higher.
-ftree-store-ccp
Perform sparse conditional constant propagation (CCP) on trees. This
pass operates on both local scalar variables and memory stores and
loads (global variables, structures, arrays, etc). This flag is
enabled by default at -O2 and higher.
-ftree-dce
Perform dead code elimination (DCE) on trees. This flag is enabled by
default at -O and higher.
-ftree-dominator-opts
Perform a variety of simple scalar cleanups (constant/copy
propagation, redundancy elimination, range propagation and expression
simplification) based on a dominator tree traversal. This also
performs jump threading (to reduce jumps to jumps). This flag is
enabled by default at -O and higher.
-ftree-dse
Perform dead store elimination (DSE) on trees. A dead store is a store into
a memory location which will later be overwritten by another store without
any intervening loads. In this case the earlier store can be deleted. This
flag is enabled by default at -O and higher.
-ftree-ch
Perform loop header copying on trees. This is beneficial since it increases
effectiveness of code motion optimizations. It also saves one jump. This flag
is enabled by default at -O and higher. It is not enabled
for -Os, since it usually increases code size.
-ftree-loop-optimize
Perform loop optimizations on trees. This flag is enabled by default
at -O and higher.
-ftree-loop-linear
Perform linear loop transformations on trees. This flag can improve cache
performance and allow further loop optimizations to take place.
-fcheck-data-deps
Compare the results of several data dependence analyzers. This option
is used for debugging the data dependence analyzers.
-ftree-loop-im
Perform loop invariant motion on trees. This pass moves only invariants that
would be hard to handle at RTL level (function calls, operations that expand to
nontrivial sequences of insns). With -funswitch-loops it also moves
operands of conditions that are invariant out of the loop, so that we can use
just trivial invariantness analysis in loop unswitching. The pass also includes
store motion.
-ftree-loop-ivcanon
Create a canonical counter for the number of iterations in loops for which determining the number of iterations requires complicated analysis. Later optimizations then may determine the number easily. This is useful especially in connection with unrolling.
-fivopts
Perform induction variable optimizations (strength reduction, induction
variable merging and induction variable elimination) on trees.
-ftree-parallelize-loops=n
Parallelize loops, i.e., split their iteration space to run in n threads.
This is only possible for loops whose iterations are independent
and can be arbitrarily reordered. The optimization is only
profitable on multiprocessor machines, for loops that are CPU-intensive,
rather than constrained e.g. by memory bandwidth. This option
implies -pthread, and thus is only supported on targets
that have support for -pthread.
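As a sketch (the function name is hypothetical, and the restrict qualifiers are an assumption that rules out overlap), a loop of this shape, where every iteration writes a distinct element and iterations do not depend on one another, is the kind of candidate this option targets:

void scale (double *restrict out, const double *restrict in, double k, int n)
{
  int i;
  /* Iterations are independent, so the iteration space can be split across
     threads, e.g. when compiled with -O2 -ftree-parallelize-loops=4. */
  for (i = 0; i < n; i++)
    out[i] = k * in[i];
}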
-ftree-sra
Perform scalar replacement of aggregates. This pass replaces structure
references with scalars to prevent committing structures to memory too
early. This flag is enabled by default at -O and higher.
-ftree-copyrename
Perform copy renaming on trees. This pass attempts to rename compiler
temporaries to other variables at copy locations, usually resulting in
variable names which more closely resemble the original variables. This flag
is enabled by default at -O and higher.
-ftree-ter
Perform temporary expression replacement during the SSA->normal phase. Single
use/single def temporaries are replaced at their use location with their
defining expression. This results in non-GIMPLE code, but gives the expanders
much more complex trees to work on resulting in better RTL generation. This is
enabled by default at -O and higher.
-ftree-vectorize
Perform loop vectorization on trees. This flag is enabled by default at
-O3.
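For instance, a simple loop like the following sketch (the function name is hypothetical; the restrict qualifiers are an assumption that lets the compiler rule out overlap between the arrays) is the kind the vectorizer targets:

void add_arrays (float *restrict a, const float *restrict b,
                 const float *restrict c, int n)
{
  int i;
  /* With -O3 (or -ftree-vectorize) this loop may be compiled to operate on
     several elements per iteration using SIMD instructions. */
  for (i = 0; i < n; i++)
    a[i] = b[i] + c[i];
}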
-ftree-vect-loop-version
Perform loop versioning when doing loop vectorization on trees. When a loop
appears to be vectorizable except that data alignment or data dependence cannot
be determined at compile time then vectorized and non-vectorized versions of
the loop are generated along with runtime checks for alignment or dependence
to control which version is executed. This option is enabled by default
except at level -Os where it is disabled.
-fvect-cost-model
Enable cost model for vectorization.
-ftree-vrp
Perform Value Range Propagation on trees. This is similar to the
constant propagation pass, but instead of values, ranges of values are
propagated. This allows the optimizers to remove unnecessary range
checks like array bound checks and null pointer checks. This is
enabled by default at -O2 and higher. Null pointer check
elimination is only done if -fdelete-null-pointer-checks is
enabled.
-ftracer
Perform tail duplication to enlarge superblock size. This transformation
simplifies the control flow of the function, allowing other optimizations to do a better job.
-funroll-loops
Unroll loops whose number of iterations can be determined at compile
time or upon entry to the loop. -funroll-loops implies
-frerun-cse-after-loop. This option makes code larger,
and may or may not make it run faster.
-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when
the loop is entered. This usually makes programs run more slowly.
-funroll-all-loops implies the same options as -funroll-loops.
-fsplit-ivs-in-unroller
Enables expressing of values of induction variables in later iterations of the unrolled loop using the value in the first iteration. This breaks long dependency chains, thus improving efficiency of the scheduling passes.
A combination of -fweb and CSE is often sufficient to obtain the same effect. However, in cases where the loop body is more complicated than a single basic block, this is not reliable. It also does not work at all on some of the architectures due to restrictions in the CSE pass.
This optimization is enabled by default.
-fvariable-expansion-in-unroller
With this option, the compiler will create multiple copies of some
local variables when unrolling a loop which can result in superior code.
-fpredictive-commoning
Perform predictive commoning optimization, i.e., reusing computations (especially memory loads and stores) performed in previous iterations of loops.
This option is enabled at level -O3.
-fprefetch-loop-arrays
If supported by the target machine, generate instructions to prefetch memory to improve the performance of loops that access large arrays.
This option may generate better or worse code; results are highly dependent on the structure of loops within the source code.
Disabled at level -Os.
-fno-peephole
-fno-peephole2
Disable any machine-specific peephole optimizations. The difference between -fno-peephole and -fno-peephole2 is in how they are implemented in the compiler; some targets use one, some use the other, a few use both.
-fpeephole is enabled by default.
-fpeephole2 is enabled at levels -O2, -O3, -Os.
-fno-guess-branch-probability
Do not guess branch probabilities using heuristics.
GCC will use heuristics to guess branch probabilities if they are not provided by profiling feedback (-fprofile-arcs). These heuristics are based on the control flow graph. If some branch probabilities are specified by __builtin_expect, then the heuristics will be used to guess branch probabilities for the rest of the control flow graph, taking the __builtin_expect info into account. The interactions between the heuristics and __builtin_expect can be complex, and in some cases, it may be useful to disable the heuristics so that the effects of __builtin_expect are easier to understand.
The default is -fguess-branch-probability at levels
-O, -O2, -O3, -Os.
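For reference, a minimal sketch (the function name is hypothetical) of how __builtin_expect supplies an explicit hint that the heuristics then take into account:

#include <stdlib.h>

void *xmalloc (size_t n)
{
  void *p = malloc (n);
  /* The hint marks the failure path as unlikely; the branch-probability
     heuristics propagate this information through the control flow graph. */
  if (__builtin_expect (p == NULL, 0))
    abort ();
  return p;
}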
-freorder-blocks
Reorder basic blocks in the compiled function in order to reduce the number of taken branches and improve code locality.
Enabled at levels -O2, -O3.
-freorder-blocks-and-partition
In addition to reordering basic blocks in the compiled function in order to reduce the number of taken branches, this option partitions hot and cold basic blocks into separate sections of the assembly and .o files, to improve paging and cache locality performance.
This optimization is automatically turned off in the presence of
exception handling, for linkonce sections, for functions with a user-defined
section attribute and on any architecture that does not support named
sections.
-freorder-functions
Reorder functions in the object file in order to improve code locality. This is implemented by using special subsections .text.hot for most frequently executed functions and .text.unlikely for unlikely executed functions. Reordering is done by the linker, so the object file format must support named sections and the linker must place them in a reasonable way.
Also, profile feedback must be available to make this option effective. See -fprofile-arcs for details.
Enabled at levels -O2, -O3, -Os.
-fstrict-aliasing
Allows the compiler to assume the strictest aliasing rules applicable to the language being compiled. For C (and C++), this activates optimizations based on the type of expressions. In particular, an object of one type is assumed never to reside at the same address as an object of a different type, unless the types are almost the same. For example, an unsigned int can alias an int, but not a void* or a double. A character type may alias any other type.
Pay special attention to code like this:
union a_union {
  int i;
  double d;
};

int f() {
  union a_union t;
  t.d = 3.0;
  return t.i;
}
The practice of reading from a different union member than the one most recently written to (called “type-punning”) is common. Even with -fstrict-aliasing, type-punning is allowed, provided the memory is accessed through the union type. So, the code above will work as expected. See Structures unions enumerations and bit-fields implementation. However, this code might not:
int f() {
  union a_union t;
  int* ip;
  t.d = 3.0;
  ip = &t.i;
  return *ip;
}
Similarly, access by taking the address, casting the resulting pointer and dereferencing the result has undefined behavior, even if the cast uses a union type, e.g.:
int f() {
  double d = 3.0;
  return ((union a_union *) &d)->i;
}
The -fstrict-aliasing option is enabled at levels
-O2, -O3, -Os.
-fstrict-overflow
Allow the compiler to assume strict signed overflow rules, depending on the language being compiled. For C (and C++) this means that overflow when doing arithmetic with signed numbers is undefined, which means that the compiler may assume that it will not happen. This permits various optimizations. For example, the compiler will assume that an expression like i + 10 > i will always be true for signed i. This assumption is only valid if signed overflow is undefined, as the expression is false if i + 10 overflows when using twos complement arithmetic. When this option is in effect any attempt to determine whether an operation on signed numbers will overflow must be written carefully to not actually involve overflow.
This option also allows the compiler to assume strict pointer semantics: given a pointer to an object, if adding an offset to that pointer does not produce a pointer to the same object, the addition is undefined. This permits the compiler to conclude that p + u > p is always true for a pointer p and unsigned integer u. This assumption is only valid because pointer wraparound is undefined, as the expression is false if p + u overflows using twos complement arithmetic.
See also the -fwrapv option. Using -fwrapv means that integer signed overflow is fully defined: it wraps. When -fwrapv is used, there is no difference between -fstrict-overflow and -fno-strict-overflow for integers. With -fwrapv certain types of overflow are permitted. For example, if the compiler gets an overflow when doing arithmetic on constants, the overflowed value can still be used with -fwrapv, but not otherwise.
The -fstrict-overflow option is enabled at levels
-O2, -O3, -Os.
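A small sketch (the function name is hypothetical) of the example described above:

/* With -fstrict-overflow the compiler may fold this function to "return 1",
   since signed overflow is assumed not to happen.  With -fno-strict-overflow
   or -fwrapv, i + 10 wraps for i near INT_MAX and the comparison can be
   false. */
int always_greater (int i)
{
  return i + 10 > i;
}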
-falign-functions
-falign-functions=n
Align the start of functions to the next power-of-two greater than n, skipping up to n bytes. For instance, -falign-functions=32 aligns functions to the next 32-byte boundary, but -falign-functions=24 would align to the next 32-byte boundary only if this can be done by skipping 23 bytes or less.
-fno-align-functions and -falign-functions=1 are equivalent and mean that functions will not be aligned.
Some assemblers only support this flag when n is a power of two; in that case, it is rounded up.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-labels
-falign-labels=n
Align all branch targets to a power-of-two boundary, skipping up to n bytes like -falign-functions. This option can easily make code slower, because it must insert dummy operations for when the branch target is reached in the usual flow of the code.
-fno-align-labels and -falign-labels=1 are equivalent and mean that labels will not be aligned.
If -falign-loops or -falign-jumps are applicable and are greater than this value, then their values are used instead.
If n is not specified or is zero, use a machine-dependent default which is very likely to be 1, meaning no alignment.
Enabled at levels -O2, -O3.
-falign-loops
-falign-loops=n
Align loops to a power-of-two boundary, skipping up to n bytes like -falign-functions. The hope is that the loop will be executed many times, which will make up for any execution of the dummy operations.
-fno-align-loops and -falign-loops=1 are equivalent and mean that loops will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-falign-jumps
-falign-jumps=n
Align branch targets to a power-of-two boundary, for branch targets where the targets can only be reached by jumping, skipping up to n bytes like -falign-functions. In this case, no dummy operations need be executed.
-fno-align-jumps and -falign-jumps=1 are equivalent and mean that branch targets will not be aligned.
If n is not specified or is zero, use a machine-dependent default.
Enabled at levels -O2, -O3.
-funit-at-a-time
Parse the whole compilation unit before starting to produce code. This allows some extra optimizations to take place but consumes more memory (in general). There are some compatibility issues with unit-at-a-time mode:
Enabling unit-at-a-time mode may change the order in which functions, variables, and top-level asm statements are emitted, and will likely break code relying on some particular ordering. The majority of such top-level asm statements, though, can be replaced by section attributes. The -fno-toplevel-reorder option may be used to keep the ordering used in the input file, at the cost of some optimizations.
Unit-at-a-time mode removes unreferenced static variables and functions. This may result in undefined references when an asm statement refers directly to variables or functions that are otherwise unused. In that case either the variable/function shall be listed as an operand of the asm statement or, in the case of top-level asm statements, the attribute used shall be used on the declaration.
Static functions can now use non-standard calling conventions on some targets, which may break asm statements that call functions directly. Again, attribute used will prevent this behavior.
As a temporary workaround, -fno-unit-at-a-time can be used, but this scheme may not be supported by future releases of GCC.
Enabled at levels -O, -O2, -O3, -Os.
-fno-toplevel-reorder
Do not reorder top-level functions, variables, and asm
statements. Output them in the same order that they appear in the
input file. When this option is used, unreferenced static variables
will not be removed. This option is intended to support existing code
which relies on a particular ordering. For new code, it is better to
use attributes.
-fweb
Constructs webs as commonly used for register allocation purposes and assigns each web an individual pseudo register. This allows the register allocation pass to operate on pseudos directly, but also strengthens several other optimization passes, such as CSE, the loop optimizer and the trivial dead code remover. It can, however, make debugging impossible, since variables will no longer stay in a “home register”.
Enabled by default with -funroll-loops.
-fwhole-program
Assume that the current compilation unit represents the whole program being compiled. All public functions and variables, with the exception of main and those merged by attribute externally_visible, become static functions and in effect are optimized more aggressively by interprocedural optimizers. While this option is equivalent to proper use of the static keyword for programs consisting of a single file, in combination with option --combine this flag can be used to compile most smaller-scale C programs, since the functions and variables become local to the whole combined compilation unit rather than to each single source file.
This option is not supported for Fortran programs.
-fcprop-registers
After register allocation and post-register allocation instruction splitting, we perform a copy-propagation pass to try to reduce scheduling dependencies and occasionally eliminate the copy.
Enabled at levels -O, -O2, -O3, -Os.
-fprofile-generate
Enable options usually used for instrumenting an application to produce a profile useful for later recompilation with profile-feedback-based optimization. You must use -fprofile-generate both when compiling and when linking your program.
The following options are enabled: -fprofile-arcs, -fprofile-values, -fvpt.
-fprofile-use
Enable profile feedback directed optimizations, and optimizations generally profitable only with profile feedback available.
The following options are enabled: -fbranch-probabilities, -fvpt, -funroll-loops, -fpeel-loops, -ftracer.
By default, GCC emits an error message if the feedback profiles do not match the source code. This error can be turned into a warning by using -Wcoverage-mismatch. Note this may result in poorly optimized code.
The following options control compiler behavior regarding floating point arithmetic. These options trade off between speed and correctness. All must be specifically enabled.
-ffloat-store
Do not store floating point variables in registers, and inhibit other options that might change whether a floating point value is taken from a register or memory.
This option prevents undesirable excess precision on machines such as the 68000 where the floating registers (of the 68881) keep more precision than a double is supposed to have. Similarly for the x86 architecture. For most programs, the excess precision does only good, but a few programs rely on the precise definition of IEEE floating point. Use -ffloat-store for such programs, after modifying them to store all pertinent intermediate computations into variables.
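A sketch (the function name is hypothetical) of the kind of code that is sensitive to excess precision; whether the comparison holds without -ffloat-store depends on the target and optimization level:

int division_is_stable (double x, double y)
{
  double q = x / y;      /* may be rounded to 64 bits when stored to memory */
  return q == x / y;     /* the recomputed quotient may still carry excess
                            precision in an x87 register, so this can
                            evaluate to false without -ffloat-store */
}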
-ffast-math
Sets -fno-math-errno, -funsafe-math-optimizations, -ffinite-math-only, -fno-rounding-math, -fno-signaling-nans and -fcx-limited-range.
This option causes the preprocessor macro __FAST_MATH__
to be defined.
This option is not turned on by any -O option since
it can result in incorrect output for programs which depend on
an exact implementation of IEEE or ISO rules/specifications for
math functions. It may, however, yield faster code for programs
that do not require the guarantees of these specifications.
-fno-math-errno
Do not set ERRNO after calling math functions that are executed with a single instruction, e.g., sqrt. A program that relies on IEEE exceptions for math error handling may want to use this flag for speed while maintaining IEEE arithmetic compatibility.
This option is not turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
The default is -fmath-errno.
On Darwin systems, the math library never sets errno. There is therefore no reason for the compiler to consider the possibility that it might, and -fno-math-errno is the default.
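As an illustration, a sketch (the function name is hypothetical) of error handling that relies on errno and therefore must not be compiled with -fno-math-errno:

#include <errno.h>
#include <math.h>

double checked_sqrt (double x)
{
  errno = 0;
  double r = sqrt (x);
  /* With -fno-math-errno, sqrt may be expanded to a single instruction that
     never writes errno, so this check would stop working. */
  if (errno == EDOM)
    return 0.0;          /* domain error, e.g. negative input */
  return r;
}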
-funsafe-math-optimizations
Allow optimizations for floating-point arithmetic that (a) assume that arguments and results are valid and (b) may violate IEEE or ANSI standards. When used at link-time, it may include libraries or startup files that change the default FPU control word or other similar optimizations.
This option is not turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications. Enables -fno-signed-zeros, -fno-trapping-math, -fassociative-math and -freciprocal-math.
The default is -fno-unsafe-math-optimizations.
-fassociative-math
Allow re-association of operands in series of floating-point operations. This violates the ISO C and C++ language standards by possibly changing the computation result. NOTE: re-ordering may change the sign of zero as well as ignore NaNs and inhibit or create underflow or overflow (and thus cannot be used on code which relies on rounding behavior like (x + 2**52) - 2**52). It may also reorder floating-point comparisons and thus may not be used when ordered comparisons are required.
This option requires that both -fno-signed-zeros and -fno-trapping-math be in effect. Moreover, it doesn't make much sense with -frounding-math.
The default is -fno-associative-math.
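The rounding idiom mentioned above, written out as a sketch (the function name is hypothetical); it relies on the add and subtract not being re-associated away:

/* Rounds x to the nearest integer (for magnitudes well below 2**51) by
   forcing intermediate rounding at 2**52.  Re-association would cancel the
   add and subtract and destroy the effect, which is why such code is
   incompatible with -fassociative-math. */
double round_nearest (double x)
{
  const double big = 4503599627370496.0;   /* 2**52 */
  return (x + big) - big;
}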
-freciprocal-math
Allow the reciprocal of a value to be used instead of dividing by the value if this enables optimizations. For example x / y can be replaced with x * (1/y), which is useful if (1/y) is subject to common subexpression elimination. Note that this loses precision and increases the number of flops operating on the value.
The default is -fno-reciprocal-math.
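A sketch (the function name is hypothetical) of code where the reciprocal transformation pays off because the same divisor is used twice:

void normalize_pair (double *a, double *b, double scale)
{
  /* With -freciprocal-math the compiler may compute 1.0 / scale once and
     turn both divisions into multiplications, trading a little precision
     for speed. */
  *a = *a / scale;
  *b = *b / scale;
}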
-ffinite-math-only
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
This option is not turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions. It may, however, yield faster code for programs that do not require the guarantees of these specifications.
The default is -fno-finite-math-only.
-fno-signed-zeros
Allow optimizations for floating point arithmetic that ignore the signedness of zero. IEEE arithmetic specifies the behavior of distinct +0.0 and −0.0 values, which then prohibits simplification of expressions such as x+0.0 or 0.0*x (even with -ffinite-math-only). This option implies that the sign of a zero result isn't significant.
The default is -fsigned-zeros.
-fno-trapping-math
Compile code assuming that floating-point operations cannot generate user-visible traps. These traps include division by zero, overflow, underflow, inexact result and invalid operation. This option requires that -fno-signaling-nans be in effect. Setting this option may allow faster code if one relies on “non-stop” IEEE arithmetic, for example.
This option should never be turned on by any -O option since it can result in incorrect output for programs which depend on an exact implementation of IEEE or ISO rules/specifications for math functions.
The default is -ftrapping-math.
-frounding-math
Disable transformations and optimizations that assume default floating point rounding behavior. This is round-to-zero for all floating point to integer conversions, and round-to-nearest for all other arithmetic truncations. This option should be specified for programs that change the FP rounding mode dynamically, or that may be executed with a non-default rounding mode. This option disables constant folding of floating point expressions at compile-time (which may be affected by rounding mode) and arithmetic transformations that are unsafe in the presence of sign-dependent rounding modes.
The default is -fno-rounding-math.
This option is experimental and does not currently guarantee to
disable all GCC optimizations that are affected by rounding mode.
Future versions of GCC may provide finer control of this setting using C99's FENV_ACCESS pragma. This command line option will be used to specify the default state for FENV_ACCESS.
-frtl-abstract-sequences
This is a size optimization method. It finds identical sequences of code, which can be turned into pseudo-procedures, and replaces all occurrences with calls to the newly created subroutine. It is, in a sense, the opposite of -finline-functions. This optimization runs at the RTL level.
-fsignaling-nans
Compile code assuming that IEEE signaling NaNs may generate user-visible traps during floating-point operations. Setting this option disables optimizations that may change the number of exceptions visible with signaling NaNs. This option implies -ftrapping-math.
This option causes the preprocessor macro __SUPPORT_SNAN__
to
be defined.
The default is -fno-signaling-nans.
This option is experimental and does not currently guarantee to
disable all GCC optimizations that affect signaling NaN behavior.
-fsingle-precision-constant
Treat floating point constants as single precision constants instead of implicitly converting them to double precision constants.
-fcx-limited-range
When enabled, this option states that a range reduction step is not needed when performing complex division. The default is -fno-cx-limited-range, but is enabled by -ffast-math.
This option controls the default setting of the ISO C99 CX_LIMITED_RANGE pragma. Nevertheless, the option applies to all languages.
The following options control optimizations that may improve performance, but are not enabled by any -O options. This section includes experimental options that may produce broken code.
-fbranch-probabilities
After running a program compiled with -fprofile-arcs (see Options for Debugging Your Program or gcc), you can compile it a second time using -fbranch-probabilities, to improve optimizations based on the number of times each branch was taken. When the program compiled with -fprofile-arcs exits, it saves arc execution counts to a file called sourcename.gcda for each source file. The information in this data file is very dependent on the structure of the generated code, so you must use the same source code and the same optimization options for both compilations.
With -fbranch-probabilities, GCC puts a
REG_BR_PROB note on each JUMP_INSN and CALL_INSN.
These can be used to improve optimization. Currently, they are only used in one place: in reorg.c, instead of guessing which path a branch is most likely to take, the REG_BR_PROB values are used to exactly determine which path is taken more often.
-fprofile-values
If combined with -fprofile-arcs, it adds code so that some data about values of expressions in the program is gathered.
With -fbranch-probabilities, it reads back the data gathered from profiling values of expressions and adds REG_VALUE_PROFILE notes to instructions for their later usage in optimizations.
Enabled with -fprofile-generate and -fprofile-use.
-fvpt
If combined with -fprofile-arcs, it instructs the compiler to add code to gather information about values of expressions.
With -fbranch-probabilities, it reads back the data gathered
and actually performs the optimizations based on them.
Currently the optimizations include specialization of division operation
using the knowledge about the value of the denominator.
-frename-registers
Attempt to avoid false dependencies in scheduled code by making use of registers left over after register allocation. This optimization will most benefit processors with lots of registers. Depending on the debug information format adopted by the target, however, it can make debugging impossible, since variables will no longer stay in a “home register”.
Enabled by default with -funroll-loops.
-ftracer
Perform tail duplication to enlarge superblock size. This transformation simplifies the control flow of the function, allowing other optimizations to do a better job.
Enabled with -fprofile-use.
-funroll-loops
Unroll loops whose number of iterations can be determined at compile time or upon entry to the loop. -funroll-loops implies -frerun-cse-after-loop, -fweb and -frename-registers. It also turns on complete loop peeling (i.e. complete removal of loops with small constant number of iterations). This option makes code larger, and may or may not make it run faster.
Enabled with -fprofile-use.
-funroll-all-loops
Unroll all loops, even if their number of iterations is uncertain when
the loop is entered. This usually makes programs run more slowly.
-funroll-all-loops implies the same options as
-funroll-loops.
-fpeel-loops
Peels loops for which there is enough information that they do not roll much (from profile feedback). It also turns on complete loop peeling (i.e. complete removal of loops with a small constant number of iterations).
Enabled with -fprofile-use.
-fmove-loop-invariants
Enables the loop invariant motion pass in the RTL loop optimizer. Enabled at level -O1.
-funswitch-loops
Move branches with loop invariant conditions out of the loop, with duplicates
of the loop on both branches (modified according to result of the condition).
-ffunction-sections
-fdata-sections
Place each function or data item into its own section in the output file if the target supports arbitrary sections. The name of the function or the name of the data item determines the section's name in the output file.
Use these options on systems where the linker can perform optimizations to improve locality of reference in the instruction space. Most systems using the ELF object format and SPARC processors running Solaris 2 have linkers with such optimizations. AIX may have these optimizations in the future.
Only use these options when there are significant benefits from doing
so. When you specify these options, the assembler and linker will
create larger object and executable files and will also be slower.
You will not be able to use gprof on all systems if you specify this option, and you may have problems with debugging if you specify both this option and -g.
-fbranch-target-load-optimize
Perform branch target register load optimization before prologue / epilogue
threading.
The use of target registers can typically be exposed only during reload,
thus hoisting loads out of loops and doing inter-block scheduling needs
a separate optimization pass.
-fbranch-target-load-optimize2
Perform branch target register load optimization after prologue / epilogue
threading.
-fbtr-bb-exclusive
When performing branch target register load optimization, don't reuse branch target registers within any basic block.
-fstack-protector
Emit extra code to check for buffer overflows, such as stack smashing
attacks. This is done by adding a guard variable to functions with
vulnerable objects. This includes functions that call alloca, and
functions with buffers larger than 8 bytes. The guards are initialized
when a function is entered and then checked when the function exits.
If a guard check fails, an error message is printed and the program exits.
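For example, a sketch (the function name is hypothetical) of the kind of function that receives a guard because it has a local buffer larger than 8 bytes:

#include <stdio.h>
#include <string.h>

void greet (const char *name)
{
  char buf[64];                      /* vulnerable object: local buffer > 8 bytes */
  /* With -fstack-protector a guard value is placed on the stack on entry and
     verified before returning; smashing the stack past buf triggers the
     failure handler instead of silently corrupting the return address. */
  strcpy (buf, "Hello, ");
  strncat (buf, name, sizeof buf - strlen (buf) - 1);
  puts (buf);
}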
-fstack-protector-all
Like -fstack-protector except that all functions are protected.
-fsection-anchors
Try to reduce the number of symbolic address calculations by using shared “anchor” symbols to address nearby objects. This transformation can help to reduce the number of GOT entries and GOT accesses on some targets.
For example, the implementation of the following function foo:
static int a, b, c;
int foo (void) { return a + b + c; }
would usually calculate the addresses of all three variables, but if you compile it with -fsection-anchors, it will access the variables from a common anchor point instead. The effect is similar to the following pseudocode (which isn't valid C):
int foo (void)
{
  register int *xr = &x;
  return xr[&a - &x] + xr[&b - &x] + xr[&c - &x];
}
Not all targets support this option.
--param name=value
In some places, GCC uses various constants to control the amount of optimization that is done. For example, GCC will not inline functions that contain more than a certain number of instructions. You can control some of these constants on the command-line using the --param option.
The names of specific parameters, and the meaning of the values, are tied to the internals of the compiler, and are subject to change without notice in future releases.
In each case, the value is an integer. The allowable choices for name are given in the following table:
salias-max-implicit-fields
salias-max-array-elements
sra-max-structure-size
sra-field-structure-ratio
struct-reorg-cold-struct-ratio
max-crossjump-edges
min-crossjump-insns
max-grow-copy-bb-insns
max-goto-duplication-insns
max-delay-slot-insn-search
max-delay-slot-live-search
max-gcse-memory
max-gcse-passes
max-pending-list-length
max-inline-insns-single
max-inline-insns-auto
large-function-insns
large-function-growth
large-unit-insns
inline-unit-growth
large-stack-frame
large-stack-frame-growth
max-inline-insns-recursive
max-inline-insns-recursive-auto
For functions declared inline, --param max-inline-insns-recursive is taken into account. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled and --param max-inline-insns-recursive-auto is used. The default value is 450.
max-inline-recursive-depth
max-inline-recursive-depth-auto
For functions declared inline, --param max-inline-recursive-depth is taken into account. For functions not declared inline, recursive inlining happens only when -finline-functions (included in -O3) is enabled and --param max-inline-recursive-depth-auto is used. The default value is 8.
min-inline-recursive-probability
When profile feedback is available (see -fprofile-generate) the actual recursion depth can be guessed from the probability that the function will recurse via a given call expression. This parameter limits inlining only to call expressions whose probability exceeds the given threshold (in percent). The default value is 10.
inline-call-cost
min-vect-loop-bound
max-unrolled-insns
max-average-unrolled-insns
max-unroll-times
max-peeled-insns
max-peel-times
max-completely-peeled-insns
max-completely-peel-times
max-unswitch-insns
max-unswitch-level
lim-expensive
iv-consider-all-candidates-bound
iv-max-considered-uses
iv-always-prune-cand-set-bound
scev-max-expr-size
omega-max-vars
omega-max-geqs
omega-max-eqs
omega-max-wild-cards
omega-hash-table-size
omega-max-keys
omega-eliminate-redundant-constraints
vect-max-version-for-alignment-checks
vect-max-version-for-alias-checks
max-iterations-to-track
hot-bb-count-fraction
hot-bb-frequency-fraction
max-predicted-iterations
align-threshold
align-loop-iterations
tracer-dynamic-coverage
tracer-dynamic-coverage-feedback
The tracer-dynamic-coverage-feedback parameter is used only when profile feedback is available. The real profiles (as opposed to statically estimated ones) are much less balanced, allowing the threshold to be a larger value.
tracer-max-code-growth
tracer-min-branch-ratio
tracer-min-branch-ratio-feedback
Similarly to tracer-dynamic-coverage, two values are present: one for compilation with profile feedback and one for compilation without. The value for compilation with profile feedback needs to be more conservative (higher) in order to make tracer effective.
max-cse-path-length
max-cse-insns
max-aliased-vops
Notice that if a function contains more memory statements than the
value of this parameter, it is not really possible to achieve this
reduction. In this case, the compiler will use the number of memory
statements as the value for max-aliased-vops.
avg-aliased-vops
ggc-min-expand
The default is 30% + 70% * (RAM/1GB) with an upper bound of 100% when RAM >= 1GB. If getrlimit is available, the notion of "RAM" is the smallest of actual RAM and RLIMIT_DATA or RLIMIT_AS. If GCC is not able to calculate RAM on a particular platform, the lower bound of 30% is used. Setting this parameter and ggc-min-heapsize to zero causes a full collection to occur at every opportunity. This is extremely slow, but can be useful for debugging.
ggc-min-heapsize
The default is the smaller of RAM/8, RLIMIT_RSS, or a limit which
tries to ensure that RLIMIT_DATA or RLIMIT_AS are not exceeded, but
with a lower bound of 4096 (four megabytes) and an upper bound of
131072 (128 megabytes). If GCC is not able to calculate RAM on a
particular platform, the lower bound is used. Setting this parameter
very large effectively disables garbage collection. Setting this
parameter and ggc-min-expand to zero causes a full collection
to occur at every opportunity.
max-reload-search-insns
max-cselib-memory-locations
max-flow-memory-locations
reorder-blocks-duplicate
reorder-blocks-duplicate-feedback
The reorder-blocks-duplicate-feedback parameter is used only when profile feedback is available and may be set to higher values than reorder-blocks-duplicate since information about the hot spots is more accurate.
max-sched-ready-insns
max-sched-region-blocks
max-sched-region-insns
min-spec-prob
max-sched-extend-regions-iters
max-sched-insn-conflict-delay
sched-spec-prob-cutoff
max-last-value-rtl
integer-share-limit
min-virtual-mappings
virtual-mappings-ratio
ssp-buffer-size
max-jump-thread-duplication-stmts
max-fields-for-field-sensitive
prefetch-latency
simultaneous-prefetches
l1-cache-line-size
l1-cache-size
l2-cache-size
use-canonical-types
max-partial-antic-length
sccvn-max-scc-size